• Title/Summary/Keyword: Computing amount

AN EFFICIENT INCOMPRESSIBLE FREE SURFACE FLOW SIMULATION USING GPU (GPU를 이용한 효율적인 비압축성 자유표면유동 해석)

  • Hong, H.E.;Ahn, H.T.;Myung, H.J.
    • Journal of computational fluids engineering
    • /
    • v.17 no.2
    • /
    • pp.35-41
    • /
    • 2012
  • This paper presents an incompressible Navier-Stokes solution algorithm for 2D free-surface flow problems on a Cartesian mesh, implemented to run on Graphics Processing Units (GPU). The INS solver uses a variable arrangement on the Cartesian mesh and a Finite Volume discretization combined with the Constrained Interpolation Profile-Conservative Semi-Lagrangian (CIP-CSL) scheme. Solving the incompressible Navier-Stokes equations for free-surface flow takes a considerable amount of computation time and memory even on modern multi-core architectures based on Central Processing Units (CPUs). With recent developments in computer architecture, the scientific computing performance of the Graphics Processing Unit (GPU) now outperforms that of the CPU. This paper focuses on utilizing the GPU's high-performance computing capability and presents an efficient solution algorithm for free-surface flow simulation. The performance of the GPU implementation with double-precision accuracy is compared to that of the CPU code on a representative free-surface flow problem, namely the dam-break problem.
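The dominant cost in such a solver is the pressure projection on the Cartesian grid. The sketch below is not the authors' CIP-CSL/GPU code; it is a minimal NumPy Jacobi solve of the pressure-Poisson equation, the kind of regular stencil update that maps naturally to one grid cell per GPU thread.

```python
# Minimal sketch (not the paper's solver): Jacobi iteration for the pressure-Poisson
# equation laplace(p) = rhs on a uniform Cartesian grid with p = 0 on the boundary.
# Each interior cell is updated independently, which is what makes the step GPU-friendly.
import numpy as np

def jacobi_pressure_solve(rhs, dx, iters=200):
    p = np.zeros_like(rhs)
    for _ in range(iters):
        p_new = p.copy()
        p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                    p[1:-1, 2:] + p[1:-1, :-2] -
                                    dx * dx * rhs[1:-1, 1:-1])
        p = p_new
    return p

# Illustrative use: an arbitrary right-hand side on a 64x64 grid.
p = jacobi_pressure_solve(np.random.rand(64, 64), dx=0.01)
print(p.shape)
```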

Mobile Monitoring System for Large Scale Scientific Computing Center (대규모 과학계산 컴퓨팅센터를 위한 모바일 모니터링 시스템)

  • Choi, Min
    • Journal of Convergence Society for SMB
    • /
    • v.2 no.1
    • /
    • pp.41-50
    • /
    • 2012
  • In this research, we developed a scalable resource monitoring system for large-scale scientific computing data centers. Because of the huge number of computing nodes, keeping track of every node individually incurs significant limitations and overhead. This research therefore proposes a layered summarizing technique applied while collecting system resource information. The technique improves scalability by reducing the amount of information handled at higher layers. Our prototype, implemented as a web service, works with HTML5 mobile web technology on smart devices.
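A minimal sketch of the layered-summarizing idea, assuming a simple rack-to-node hierarchy with illustrative metric names (not the paper's system or API): each layer forwards only aggregates, so the data volume shrinks toward the top.

```python
# Each layer keeps only summaries of the layer below instead of every raw sample.
from statistics import mean

node_metrics = {                      # raw per-node load samples (lowest layer)
    "rack1": {"n01": 0.82, "n02": 0.40, "n03": 0.95},
    "rack2": {"n04": 0.10, "n05": 0.55},
}

def summarize_rack(samples):
    """Per-rack summary forwarded upward instead of every raw sample."""
    vals = list(samples.values())
    return {"avg_load": mean(vals), "max_load": max(vals), "nodes": len(vals)}

rack_summaries = {rack: summarize_rack(s) for rack, s in node_metrics.items()}
center_summary = {                    # top layer, e.g. what a mobile client would fetch
    "avg_load": mean(r["avg_load"] for r in rack_summaries.values()),
    "nodes": sum(r["nodes"] for r in rack_summaries.values()),
}
print(rack_summaries)
print(center_summary)
```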

A Dynamic Packet Recovery Mechanism for Realtime Service in Mobile Computing Environments

  • Park, Kwang-Roh;Oh, Yeun-Joo;Lim, Kyung-Shik;Cho, Kyoung-Rok
    • ETRI Journal
    • /
    • v.25 no.5
    • /
    • pp.356-368
    • /
    • 2003
  • This paper analyzes the characteristics of packet losses in mobile computing environments based on the Gilbert model and then describes a mechanism that can recover lost audio packets using redundant data. Using information periodically reported by the receiver, the sender dynamically adjusts the amount and offset values of the redundant data under the constraint of minimizing the bandwidth consumption of wireless links. Since mobile computing environments are often characterized by frequent and consecutive packet losses, the loss recovery mechanism needs to deal efficiently with both random and consecutive losses. To achieve this, the suggested mechanism uses relatively large, discontinuous exponential offset values, which gives the same effect as using both sequential and interleaved redundant information. To verify the effectiveness of the mechanism, we extended and implemented RTP/RTCP and accompanying applications. The experimental results show that our mechanism, with an exponential offset, achieves a remarkably low complete packet loss rate and adapts dynamically to fluctuations in the packet loss pattern in mobile computing environments.
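A minimal sketch of the redundancy scheme described above; the packet layout and the offset set (1, 2, 4) are illustrative stand-ins, not the paper's RTP extension.

```python
# Sender piggybacks copies of earlier payloads at exponential offsets; the receiver
# can then reconstruct a packet lost in a burst from a copy carried by a later packet.
def build_packet(seq, payloads, offsets=(1, 2, 4)):
    redundant = {seq - d: payloads[seq - d] for d in offsets if seq - d >= 0}
    return {"seq": seq, "data": payloads[seq], "red": redundant}

def recover(received):
    """A lost packet is recovered if any received packet carries its redundant copy."""
    recovered = {}
    for pkt in received:
        recovered.setdefault(pkt["seq"], pkt["data"])
        for s, d in pkt["red"].items():
            recovered.setdefault(s, d)
    return recovered

payloads = [f"frame{i}" for i in range(10)]
packets = [build_packet(i, payloads) for i in range(10)]
survived = [p for p in packets if p["seq"] not in (3, 4)]   # burst loss of packets 3 and 4
print(sorted(recover(survived)))                            # 3 and 4 come back via the offsets
```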

Topology-based Workflow Scheduling in Commercial Clouds

  • Ji, Haoran;Bao, Weidong;Zhu, Xiaomin;Xiao, Wenhua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4311-4330
    • /
    • 2015
  • Cloud computing has become a new paradigm by enabling on-demand provisioning of applications, platforms, or computing resources for clients. Workflow scheduling has always been treated as one of the most challenging problems in clouds. Commercial clouds are widely used in scientific research, such as biology, astronomy, and weather forecasting. For a cloud service provider, pursuing profit is central to the commercial essence of clouds, and this is equally important when providing services to workflow tasks. In this paper, we address the issues of workflow scheduling in commercial clouds. This work takes communication costs into account, which have often been ignored. A topology-based workflow-scheduling algorithm named the Resource Auction Algorithm (REAL) is then proposed with the objective of increasing profit. The algorithm performs well in searching for the optimum schedule for a sample workflow. We also find that there exists a certain resource amount that yields the highest profit, which motivates further research. Experimental results demonstrate that the analysis of the most profitable strategies is reasonable and that REAL efficiently obtains an optimized scheme with low computational complexity.
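The sketch below is not the REAL auction itself; it is a hedged illustration of topology-based scheduling, taking tasks in topological (dependency) order and placing each on the cheapest resource, with made-up task names, VM types, and runtimes.

```python
# Greedy topology-ordered placement: respect dependencies, then minimize per-task cost.
from graphlib import TopologicalSorter

workflow = {"prep": [], "align": ["prep"], "stats": ["prep"], "report": ["align", "stats"]}
cost_per_hour = {"small_vm": 0.05, "large_vm": 0.20}
runtime = {  # estimated hours for each task on each resource type (illustrative numbers)
    "prep":   {"small_vm": 2.0, "large_vm": 0.6},
    "align":  {"small_vm": 4.0, "large_vm": 1.0},
    "stats":  {"small_vm": 1.0, "large_vm": 0.4},
    "report": {"small_vm": 0.5, "large_vm": 0.2},
}

schedule = {}
for task in TopologicalSorter(workflow).static_order():     # dependency-respecting order
    vm = min(cost_per_hour, key=lambda v: runtime[task][v] * cost_per_hour[v])
    schedule[task] = (vm, round(runtime[task][vm] * cost_per_hour[vm], 3))
print(schedule)
```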

Defending Non-control-data Attacks using Influence Domain Monitoring

  • Zhang, Guimin;Li, Qingbao;Chen, Zhifeng;Zhang, Ping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.8
    • /
    • pp.3888-3910
    • /
    • 2018
  • As an increasing number of defense methods against control-data attacks are deployed in practice, control-data attacks have become challenging, and non-control-data attacks are on the rise. However, defense methods against non-control-data attacks are still deficient even though these attacks can produce damage as significant as that of control-data attacks. We present a method to defend against non-control-data attacks using influence domain monitoring (IDM). A definition of the data influence domain is first proposed to describe the characteristics of a variable during its life cycle. IDM extracts security-critical non-control data from the target program and then instruments the target for monitoring these variables' influence domains to ensure that corrupted variables will not be used as the attackers intend. Therefore, attackers may be able to modify the value of one security-critical variable by exploiting certain memory corruption vulnerabilities, but they will be prevented from using the variable for nefarious purposes. We evaluate a prototype implementation of IDM and use the experimental results to show that this method can defend against most known non-control-data attacks while imposing a moderate amount of performance overhead.
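A minimal sketch of the influence-domain idea in pure Python (the paper instruments binaries; the class and variable names here are hypothetical): a security-critical variable is re-checked at every use site against the values its influence domain allows, so a corrupted value cannot silently steer privileged behavior.

```python
# Hypothetical illustration of use-site checking for a security-critical variable.
class MonitoredVar:
    def __init__(self, name, value, allowed):
        self.name, self._value, self._allowed = name, value, allowed

    def get(self):
        # Check inserted at each use site: reject values outside the influence domain.
        if not self._allowed(self._value):
            raise RuntimeError(f"{self.name} corrupted: {self._value!r}")
        return self._value

    def set(self, value):
        self._value = value

uid = MonitoredVar("uid", 1000, allowed=lambda v: isinstance(v, int) and v >= 1000)
uid.set(0)        # simulated non-control-data corruption (uid 0 would mean root)
print(uid.get())  # the use site rejects the corrupted value instead of granting privilege
```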

Cloud Computing to Improve JavaScript Processing Efficiency of Mobile Applications

  • Kim, Daewon
    • Journal of Information Processing Systems
    • /
    • v.13 no.4
    • /
    • pp.731-751
    • /
    • 2017
  • The burgeoning distribution of smartphone web applications across various mobile environments has drawn increasing attention to the performance of mobile applications implemented with JavaScript and HTML5 (Hyper Text Markup Language 5). If the application software has a simple processing structure, the problem is benign. However, browser loads are becoming more burdensome as the amount of JavaScript processing continues to increase, and the processing time and capacity available to JavaScript in current mobile browsers are limited. As a partial solution, the Web Worker was designed to provide multi-threading; however, it cannot match the computing ability of a native application on mobile devices and is not sufficient to improve processing speed. The method proposed in this research overcomes the resource limitations of the mobile client and delivers performance comparable to native application software by providing a high-performance computing service: it shifts the JavaScript processing of a mobile device onto a cloud-based server. A performance evaluation revealed the proposed algorithm to be up to 6 times faster than the existing mobile browser's JavaScript processing and 3 to 6 times faster than Web Worker, while also using less memory than the existing technology.
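A minimal sketch of the offloading round trip; both ends are shown in one process so the example runs as-is, whereas the real system sends the serialized job over the network to a cloud server and only the small result returns to the mobile browser. The job format and names are illustrative, not the paper's protocol.

```python
# The "server" stands in for the cloud endpoint; only JSON crosses the boundary.
import json

def server_handle(request_json):
    """Executes the heavy job that would otherwise run in the mobile browser."""
    job = json.loads(request_json)
    total = sum(i * i for i in range(job["n"]))       # heavy loop done remotely
    return json.dumps({"result": total})

def client_offload(n):
    request = json.dumps({"op": "sum_of_squares", "n": n})  # what the client would POST
    response = server_handle(request)                       # network round trip elided
    return json.loads(response)["result"]

print(client_offload(1_000_000))
```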

Implementing Efficient Camera ISP Filters on GPGPUs Using OpenCL (GPGPU 기반의 효율적인 카메라 ISP 구현)

  • Park, Jongtae;Facchini, Beron;Hong, Jingun;Burgstaller, Bernd
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.11a
    • /
    • pp.1784-1787
    • /
    • 2010
  • General Purpose Graphics Processing Unit (GPGPU) computing is a technique that utilizes the high-performance many-core processors of high-end graphics cards for general-purpose computations such as 3D graphics, video/image processing, computer vision, scientific computing, HPC and many more. GPGPUs offer a vast amount of raw computing power, but programming them is extremely challenging because of hardware idiosyncrasies. The Open Computing Language (OpenCL) has been proposed as a vendor-independent GPGPU programming interface. OpenCL is very close to the hardware and thus does little to increase GPGPU programmability. In this paper we present how a set of digital camera image signal processing (ISP) filters can be realized efficiently on GPGPUs using OpenCL. Although we found the ISP filters to be memory-bound computations, our GPGPU implementations achieve speedups of up to a factor of 64.8 over their sequential counterparts. On GPGPUs, our proposed optimizations achieved speedups between 145% and 275% over the baseline GPGPU implementations. Our experiments were conducted on a GeForce GTX 275; because of OpenCL we expect our optimizations to be applicable to other architectures as well.
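ISP filters are per-pixel maps, which is why they port naturally to OpenCL work-items. The sketch below shows one such filter (white-balance gains plus gamma) in NumPy only; it is an assumed example of the filter class, and the OpenCL host and kernel code of the paper are not reproduced here.

```python
# A per-pixel, memory-bound filter: every pixel is independent, so on a GPGPU each
# pixel would be handled by one work-item; here NumPy stands in for the kernel.
import numpy as np

def white_balance_gamma(rgb, gains=(1.8, 1.0, 1.4), gamma=2.2):
    """rgb: float array in [0, 1] of shape (H, W, 3)."""
    out = np.clip(rgb * np.asarray(gains), 0.0, 1.0)   # channel gains
    return out ** (1.0 / gamma)                         # gamma correction

frame = np.random.rand(480, 640, 3).astype(np.float32)
print(white_balance_gamma(frame).shape)
```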

Optimal Moving Pattern Extraction of the Moving Object for Efficient Resource Allocation (효율적 자원 배치를 위한 이동객체의 최적 이동패턴 추출)

  • Cho, Ho-Seong;Nam, Kwang-Woo;Jang, Min-Seok;Lee, Yon-Sik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.689-692
    • /
    • 2021
  • This paper is a preliminary study on improving the efficiency of mobile-agent-based offloading, aimed at optimizing the allocation of computing resources and reducing the latency of application services that run close to the user in a Fog/Edge Computing (FEC) environment. We propose an algorithm that effectively reduces the execution time and the amount of memory required when extracting optimal moving patterns from the vast set of spatio-temporal movement history data of moving objects. Through frequency-based optimal path extraction, the proposed algorithm can be useful for distributing and deploying computing resources for computation offloading in future FEC environments.
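A minimal sketch of frequency-based pattern extraction over trajectories; the cell IDs and fixed sub-path length are illustrative assumptions, not the paper's spatio-temporal model.

```python
# Count contiguous sub-paths across object trajectories and keep the most frequent
# one as the candidate "optimal moving pattern".
from collections import Counter

trajectories = [              # cell-ID sequences of moving objects (illustrative data)
    ["A", "B", "C", "D"],
    ["A", "B", "C", "E"],
    ["B", "C", "D"],
]

def subpaths(traj, length):
    return [tuple(traj[i:i + length]) for i in range(len(traj) - length + 1)]

counts = Counter(p for t in trajectories for p in subpaths(t, 3))
pattern, freq = counts.most_common(1)[0]
print(pattern, freq)          # ('A', 'B', 'C') with frequency 2; ties break by first occurrence
```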

Study on the Analysis of the Recognition and Improvements by Professors for the CAC(Computing Engineering Committee) (컴퓨터·정보(공)학 분야 공학교육인증제 운영성과에 대한 교수들의 인식 분석 및 개선방안 연구)

  • Han, Ji Young;Kang, So Yeon;Jeon, Ju Hyun
    • Journal of Engineering Education Research
    • /
    • v.19 no.5
    • /
    • pp.35-47
    • /
    • 2016
  • This study analyzed the outcomes of the CAC (Computing Accreditation Committee) program, applied in the field of computing engineering since 2007, and derived improvements. A literature review of academic journals, a survey, and FGIs (Focus Group Interviews) were used to accomplish the objectives of the study; the survey and FGIs targeted professors. For the survey, 20 of the 44 universities nationwide that operate the CAC program were investigated, with sample universities chosen by region. The FGI was conducted with 6 experts to analyze the performance and problems of the CAC in more detail. The results were as follows. First, the CAC program was activated through the government's Seoul Accord activation support program. Second, BSM (Basic Science and Math), engineering major, and engineering design education have been strengthened since the introduction of the CAC in the computing engineering field. Third, the soft skills needed by students in colleges of engineering have been organized into the professional general curriculum, and professors recognize improvement in students' abilities in these skills. Satisfaction with the CAC program was found to be at a normal level, but the CAC program contributed to improvements in the educational system and to the overall quality of computing engineering education. Despite these positive results, an incentive system for graduates of the accredited program, expanded departmental autonomy, a reduction in the amount of the self-evaluation report, and administrative staffing support were suggested so that the CAC program can take root successfully.

A Global Framework for Parallel and Distributed Application with Mobile Objects (이동 객체 기반 병렬 및 분산 응용 수행을 위한 전역 프레임워크)

  • Han, Youn-Hee;Park, Chan-Yeol;Hwang, Chong-Sun;Jeong, Young-Sik
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.6
    • /
    • pp.555-568
    • /
    • 2000
  • The World Wide Web has become the largest virtual system, almost universal in scope. Recent research has shown that it is effective to utilize idle hosts on the World Wide Web for running applications that require a substantial amount of computation. This novel computing paradigm has been referred to as global computing. In this paper, we propose and implement a mobile-object-based global computing framework called Tiger, whose primary goal is to provide object-oriented programming libraries that support distribution, dispatching, and migration of objects, as well as concurrency among computational activities. The programming libraries give programmers access, location, and migration transparency for distributed and mobile objects. Tiger's second goal is to provide a system that supports the requisites of a global computing environment: scalability, and resource and location management. The Tiger system and its programming libraries allow a programmer to easily develop object-oriented parallel and distributed applications using globally extended computing resources. We also present the performance improvement obtained in experiments with computation-intensive workloads such as parallel fractal image processing and genetic-neuro-fuzzy algorithms.
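A minimal sketch of the mobile-object idea, with names that are assumptions rather than Tiger's actual API: an object's state is serialized, shipped to another host, and execution resumes there, which is the kind of migration transparency the framework's libraries provide.

```python
# Migration illustrated in one process: the "hosts" dict stands in for remote machines.
import pickle

class Worker:
    def __init__(self, data):
        self.data, self.progress = data, 0

    def step(self):
        self.progress += 1
        return sum(self.data[: self.progress])

hosts = {"host_a": [], "host_b": []}          # stand-ins for machines on the Web

w = Worker([1, 2, 3, 4])
w.step()                                      # some work happens on host_a
blob = pickle.dumps(w)                        # migrate: capture the object's state ...
hosts["host_b"].append(pickle.loads(blob))    # ... and re-create it on host_b
print(hosts["host_b"][0].step())              # execution continues with prior progress
```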
