• Title/Abstract/Keywords: analytical workloads


MLPPI Wizard: An Automated Multi-level Partitioning Tool on Analytical Workloads

  • Suh, Young-Kyoon; Crolotte, Alain; Kostamaa, Pekka
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 4 / pp.1693-1713 / 2018
  • An important technique used by database administrators (DBAs) to improve performance in decision-support workloads associated with a star schema is multi-level partitioning. Queries then benefit from performance improvements via partition elimination, due to constraints on queries expressed on the dimension tables. As the task of multi-level partitioning can be overwhelming for a DBA, we propose a wizard that facilitates the task by calculating a partitioning scheme for a particular workload. The system resides completely on a client and interacts with the cost-estimation subsystem of the query optimizer via an API over the network, thereby eliminating any need to make changes to the optimizer. In addition, since only cost estimates are needed, the wizard's overhead is very low. By using a greedy algorithm for search-space enumeration over the query predicates in the workload, the wizard is efficient, with worst-case polynomial complexity. The proposed technology can be applied to any clustering or partitioning scheme in any database management system that provides an interface to the query optimizer. Applied to the Teradata database, the technology provides recommendations that outperform a human expert's solution as measured by the total execution time of the workload. We also demonstrate the scalability of our approach as the fact table (and workload) size increases.
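
    The greedy enumeration described in this abstract can be sketched in a few lines. The following Python sketch is illustrative, not the wizard's implementation: estimate_workload_cost is a hypothetical callback standing in for the optimizer's cost-estimation API that the wizard queries over the network, and the predicate strings are made up.

        # Illustrative sketch of greedy search-space enumeration for a
        # multi-level partitioning scheme, in the spirit of the MLPPI wizard.
        # estimate_workload_cost is a hypothetical stand-in for the
        # optimizer's cost-estimation API; here it is an injected callback.
        from typing import Callable, FrozenSet, List

        def greedy_partitioning(
            candidate_predicates: List[str],
            estimate_workload_cost: Callable[[FrozenSet[str]], float],
            max_levels: int = 15,
        ) -> FrozenSet[str]:
            """Greedily add partitioning predicates that lower estimated cost."""
            scheme: FrozenSet[str] = frozenset()
            best_cost = estimate_workload_cost(scheme)  # cost with no partitioning
            while len(scheme) < max_levels:
                best_step = None
                for pred in candidate_predicates:
                    if pred in scheme:
                        continue
                    cost = estimate_workload_cost(scheme | {pred})
                    if cost < best_cost:
                        best_cost, best_step = cost, pred
                if best_step is None:          # no single predicate improves cost
                    break
                scheme = scheme | {best_step}  # commit the best addition
            return scheme

        # Toy usage: each chosen predicate halves the scanned fraction but
        # adds a small maintenance overhead (made-up cost model).
        def demo_cost(s: FrozenSet[str]) -> float:
            return 1000.0 * (0.5 ** len(s)) + 5.0 * len(s)

        print(greedy_partitioning(["d_year=2018", "d_region='EU'", "d_cat=7"], demo_cost))

    With n candidate predicates and at most L levels, the loop issues O(n*L) cost-estimation calls, consistent with the worst-case polynomial complexity claimed above.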

Employee's Negative Psychological Factors Based on Excessive Workloads and Its Solutions Using Consultation with the Manager

  • PARK, Hye-Ryoung; KIM, Seong-Gon
    • 동아시아경상학회지 / Vol. 10, No. 1 / pp.59-69 / 2022
  • Purpose - Burnout causes workers to quit their jobs: under the heavy workloads they are subjected to, employees feel they have little control over what they must accomplish in the workplace. The purpose of this research is to provide adequate solutions, using a brief consultation process, that reduce these negative psychological factors. Research design, data, and methodology - The current research conducted a Qualitative Content Analysis (QCA), one of the most widely employed analytical tools; it has been used all over the globe in various research applications in library and information science. Until the recent decade, this analysis was primarily used as a quantitative method. Result - Based on a systematic literature analysis, excessive workloads can be addressed by finding proper solutions to the issues of depression, anxiety, irritability, and discouragement. The solutions are (1) combating excessive workloads through effective employee selection, (2) effective employee training, and (3) job redesign. Conclusion - Recruiting employees who have the skills for a given job also makes it possible for the organization to deploy its employees effectively and with minimal workload problems, because the organization understands the capacity of work each employee can complete.

An Analytical Approach to Evaluation of SSD Effects under MapReduce Workloads

  • Ahn, Sungyong; Park, Sangkyu
    • JSTS: Journal of Semiconductor Technology and Science / Vol. 15, No. 5 / pp.511-518 / 2015
  • As the cost per byte of SSDs dramatically decreases, introducing SSDs into Hadoop becomes an attractive choice for high-performance data processing. In this paper, the cost-per-performance of an SSD-based Hadoop cluster (SSD-Hadoop) and an HDD-based Hadoop cluster (HDD-Hadoop) is evaluated. To this end, we propose a MapReduce performance model that uses a queueing network to simulate the execution time of a MapReduce job with varying cluster sizes. To achieve an accurate model, the execution-time distribution of MapReduce jobs is carefully profiled. The developed model can precisely predict the execution time of MapReduce jobs, with less than 7% difference in most cases. Simulations with a varying number of cluster nodes also show that SSD-Hadoop is 20% more cost-efficient than HDD-Hadoop, because SSD-Hadoop needs fewer nodes than HDD-Hadoop to achieve comparable performance.
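
    To make the cost-per-performance comparison concrete, here is a deliberately simplified Python sketch. It is not the paper's calibrated queueing-network model: it approximates a MapReduce job as sequential waves of map tasks over the cluster, and every task rate, target, and price below is a made-up illustrative value.

        # Crude wave-based approximation (not the paper's queueing-network
        # model) of MapReduce job time versus cluster size, used to compare
        # cost-per-performance of SSD- and HDD-backed nodes.
        import math

        def job_time(num_tasks: int, nodes: int, task_rate: float) -> float:
            """Seconds to finish a job: tasks run in waves over the nodes.

            Each node finishes one task every 1/task_rate seconds, so a job
            of num_tasks tasks takes ceil(num_tasks / nodes) waves.
            """
            return math.ceil(num_tasks / nodes) / task_rate

        def min_nodes_for_target(num_tasks: int, task_rate: float,
                                 target: float) -> int:
            """Smallest cluster size meeting the execution-time target."""
            nodes = 1
            while job_time(num_tasks, nodes, task_rate) > target:
                nodes += 1
            return nodes

        NUM_TASKS, TARGET = 2000, 600.0                            # illustrative only
        hdd_nodes = min_nodes_for_target(NUM_TASKS, 0.05, TARGET)  # slower I/O
        ssd_nodes = min_nodes_for_target(NUM_TASKS, 0.08, TARGET)  # faster I/O
        print(f"HDD: {hdd_nodes} nodes, relative cost {hdd_nodes * 1.0:.1f}")
        print(f"SSD: {ssd_nodes} nodes, relative cost {ssd_nodes * 1.4:.1f}")

    The mechanism is the one the abstract describes: faster storage lets each node finish tasks sooner, so the SSD cluster meets the same execution-time target with fewer nodes, which can outweigh its higher per-node price.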

An Investigation of the Effect of Re-entrance to the Same Station in a Job Shop Scheduling

  • 문덕희; 최연혁; 신양우
    • 산업경영시스템학회지 / Vol. 21, No. 47 / pp.125-138 / 1998
  • In this paper, we investigate the effect of allowing re-entrance to the same work station in a job shop with multiple identical machines. System A is defined as the system in which re-entrance is not permitted, and system B as the system in which re-entrance is permitted. From the analytical results for the queueing network, we find that the two systems have the same queue-length distributions and utilizations under the FIFO dispatching rule when all parameters are the same. Simulation models are developed for various comparisons between the two systems, and simulation experiments are conducted for combinations of five dispatching rules, two average workloads, and two due-date allowances. Five performance measures are selected for the comparison. The simulation results show that permitting re-entrance affects performance for some combinations of system environments.
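
    One way to see the analytical claim, that re-entrance leaves queue-length distributions unchanged under FIFO when all parameters are the same, is via the product form of Jackson networks: an M/M/c station's stationary distribution depends only on its effective arrival rate and service parameters. The Python sketch below uses illustrative rates, not the paper's parameters.

        # For system B, the traffic equation lam_eff = lam_ext + p * lam_eff
        # gives lam_eff = lam_ext / (1 - p). If that equals system A's
        # arrival rate, the two stations have identical queue-length
        # distributions, because the M/M/c formula takes nothing else.
        import math

        def mmc_distribution(lam: float, mu: float, c: int, n_max: int = 20):
            """Stationary probabilities P(N = n) for an M/M/c queue
            (truncated at n_max; close enough for illustration)."""
            assert lam < c * mu, "station must be stable"
            terms = [
                (lam / mu) ** n / math.factorial(n) if n < c
                else (lam / mu) ** n / (math.factorial(c) * c ** (n - c))
                for n in range(n_max + 1)
            ]
            z = sum(terms)
            return [t / z for t in terms]

        MU, C = 1.0, 2                        # two identical machines at the station
        LAM_A = 1.5                           # system A: every job visits once
        P_FB = 0.25                           # system B: jobs re-enter with prob 0.25
        LAM_B_EXT = LAM_A * (1 - P_FB)        # external arrivals to system B
        LAM_B_EFF = LAM_B_EXT / (1 - P_FB)    # effective rate from the traffic equation

        dist_a = mmc_distribution(LAM_A, MU, C)
        dist_b = mmc_distribution(LAM_B_EFF, MU, C)
        print(max(abs(a - b) for a, b in zip(dist_a, dist_b)))  # prints 0.0

    Since the distributions coincide, the utilizations coincide as well, matching the FIFO result stated in the abstract; the simulation experiments are needed only for the non-FIFO dispatching rules.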
