• Title/Summary/Keyword: Computing amount

Privacy-Preserving IoT Data Collection in Fog-Cloud Computing Environment

  • Lim, Jong-Hyun;Kim, Jong Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.9
    • /
    • pp.43-49
    • /
    • 2019
  • Today, with the development of the Internet of Things, wearable devices for personal health care have become widespread. Global information and communication technology companies are developing a variety of wearable health devices that use built-in sensors to collect personal health information such as heart rate, step count, and calories burned. However, since individual health data includes sensitive information, careless collection of such data can lead to privacy problems. There is therefore a growing need for technology that collects sensitive health data from wearable health devices while preserving privacy. In recent years, local differential privacy (LDP), which enables sensitive data to be collected while preserving privacy, has attracted much attention. In this paper, we develop a technique for collecting a large amount of health data from smartwatches, one of the most popular wearable health devices, using local differential privacy. Experimental results with real data show that the proposed method can effectively collect sensitive health data from smartwatch users while preserving privacy.
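The abstract does not spell out the paper's perturbation mechanism, so the following is only a minimal sketch of how LDP-based collection of bucketed smartwatch readings could look, using k-ary randomized response; the bucket boundaries, the epsilon value, and the function names are illustrative assumptions, not the authors' design.

```python
import math
import random
from collections import Counter

def krr_perturb(value, domain, epsilon):
    """k-ary randomized response: keep the true bucket with probability p,
    otherwise report a uniformly random other bucket (satisfies epsilon-LDP)."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def estimate_frequencies(reports, domain, epsilon):
    """Unbiased frequency estimates recovered from the perturbed reports."""
    k = len(domain)
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts.get(v, 0) / n - q) / (p - q) for v in domain}

# Hypothetical example: heart-rate readings bucketed into ranges before reporting.
domain = ["<60", "60-69", "70-79", "80-89", "90-99", ">=100"]
true_data = random.choices(domain, weights=[5, 20, 35, 25, 10, 5], k=10000)
reports = [krr_perturb(v, domain, epsilon=1.0) for v in true_data]
print(estimate_frequencies(reports, domain, epsilon=1.0))
```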

Trends in Ultra Low Power Intelligent Edge Semiconductor Technology (초저전력 엣지 지능형반도체 기술 동향)

  • Oh, K.I.;Kim, S.E.;Bae, Y.H.;Park, S.M.;Lee, J.J.;Kang, S.W.
    • Electronics and Telecommunications Trends
    • /
    • v.33 no.6
    • /
    • pp.24-33
    • /
    • 2018
  • In the age of IoT, in which everything is connected to a network, there have been increases in data traffic, latency, and the risk of personal privacy breaches that conventional cloud computing technology cannot cope with. The idea of edge computing has emerged as a solution to these issues, and furthermore, the concept of ultra-low-power edge intelligent semiconductors, in which the IoT device itself makes intelligent decisions and processes data, has been established. The key elements of this function are an intelligent semiconductor based on artificial intelligence, connectivity for the efficient connection of neurons and synapses, and a large-scale spiking neural network simulation framework for predicting the performance of a neural network. This paper covers current trends in ultra-low-power edge intelligent semiconductors, including issues regarding their technology and applications.

Communication Resource Allocation Strategy of Internet of Vehicles Based on MEC

  • Ma, Zhiqiang
    • Journal of Information Processing Systems
    • /
    • v.18 no.3
    • /
    • pp.389-401
    • /
    • 2022
  • The Internet of Vehicles (IoV) is growing rapidly, and the large volume of data exchanged has led to long mobile-network communication delays and high energy consumption. A strategy for IoV communication resource allocation based on mobile edge computing (MEC) is thus proposed. First, a model of the cloud-side collaborative cache and resource allocation system for the IoV is designed, in which vehicles can offload tasks to MEC servers or to neighboring vehicles. Then, the communication model and the computation model of the IoV system are comprehensively analyzed, and an optimization objective that minimizes delay and energy consumption is constructed. Finally, the on-board computing task is encoded, the optimization problem is transformed into a knapsack problem, and the optimal resource allocation strategy is obtained through a genetic algorithm. Simulation results on the MATLAB platform show that the proposed strategy offloads tasks to the MEC server or to neighboring vehicles, making full use of system resources. Across the scenarios tested, energy consumption does not exceed 300 J and 180 J respectively, with an average delay of 210 ms, effectively reducing system overhead and improving response speed.
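As a rough illustration of the knapsack encoding and genetic-algorithm search described above, the sketch below evolves a binary offload/local decision per task under a single MEC capacity constraint. The per-task cost figures, the capacity, and the GA parameters are invented for illustration and do not reproduce the paper's model.

```python
import random

# Hypothetical per-task data: (local_cost, offload_cost, offload_load), where cost
# is a weighted sum of delay and energy, and load consumes MEC server capacity.
tasks = [(random.uniform(2, 5), random.uniform(0.5, 2), random.uniform(0.5, 1.5))
         for _ in range(20)]
MEC_CAPACITY = 10.0

def fitness(chrom):
    """Total cost of a binary offloading decision; exceeding capacity is penalized."""
    cost = sum(off if bit else loc for bit, (loc, off, _) in zip(chrom, tasks))
    load = sum(l for bit, (_, _, l) in zip(chrom, tasks) if bit)
    return cost + 100.0 * max(0.0, load - MEC_CAPACITY)

def genetic_algorithm(pop_size=50, generations=200, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in tasks] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                    # selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(tasks))       # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in a[:cut] + b[cut:]]        # mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)

best, cost = genetic_algorithm()
print("offloading decisions:", best, "total cost:", round(cost, 2))
```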

Effects Of Computer - Based Information Load On Market Categorization Decision: An Experiment (컴퓨터 정보의 부하가 시장분류 의사결정에 미치는 영향: 실험연구)

  • Jo, Nam-Jae
    • Asia pacific journal of information systems
    • /
    • v.4 no.2
    • /
    • pp.214-246
    • /
    • 1994
  • As the use of information technology continues to bring a dramatic increase in the amount of data available to managers, researchers have noted that having too much data can be as much of a problem as having too little. It becomes very important to understand the effects of the "information explosion" on the way managers perform their work. This study examines the effect of the amount of available data on the process and outcome of thinking in a context where managers are equipped with computing tools. The purpose of this study is to better understand how managers respond cognitively to increased information availability. In this experiment with 104 MBA students, three groups of subjects were asked to identify high- and low-potential market categories for direct-mail sales based on three different amounts of computer-based socioeconomic data, designed in line with existing research on cognition and information overload. Analyses of the outcomes showed that the group given a medium amount of data used the data and the computer-based analysis tools most effectively and efficiently. We expect the study to provide a basis for relating future MIS research to theories of cognition in related fields such as psychology and organizational behavior.

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, there have been many active attempts to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective way to run numerical ocean models that require large-scale resources, or to quickly prepare a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These new features facilitate ocean modeling experimentation on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analysis of the performance and features of commercial cloud services for numerical modeling is essential in order to select appropriate systems, as this can help minimize execution time and the amount of resources utilized. Cache memory has a large effect on an ocean model's processing, which reads and writes data in multidimensional array structures, and network speed matters because of the large amount of data moved during communication. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for the transition of other ocean models into cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory; memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small grid sizes. Our analysis results will be helpful as a reference for constructing the best computing system in the cloud to minimize the time and cost of numerical ocean modeling.
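For readers who want a quick feel for the STREAM-style memory-bandwidth measurement mentioned above, here is a minimal Python/NumPy approximation of the triad kernel. It is only a rough stand-in for the actual STREAM benchmark; the array size, repeat count, and the reported traffic estimate are simplifying assumptions.

```python
import time
import numpy as np

def stream_triad(n=50_000_000, repeats=5, scalar=3.0):
    """STREAM-style triad a = b + scalar * c; returns effective bandwidth in GB/s.
    Traffic is approximated as three 8-byte arrays per pass (read b, read c, write a),
    ignoring the temporary created by scalar * c."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.add(b, scalar * c, out=a)
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 3 * n * 8
    return bytes_moved / best / 1e9

print(f"triad bandwidth: {stream_triad():.1f} GB/s")
```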

Development of Web-based High Throughput Computing Environment and Its Applications (웹기반 대용량 계산환경 구축 및 응용연구)

  • Jeong, Min-Joong;Kim, Byung-Sang
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.20 no.3
    • /
    • pp.365-370
    • /
    • 2007
  • Many engineering problems require a large amount of computing resources for iterative simulations involving many parameters and input files. To address this, this paper proposes an e-Science-based computational system. The system uses Grid computing technology to establish an integrated web service environment that supports distributed high-throughput computational simulations and remote execution. The proposed system provides an easy-to-use parametric study service with real-time monitoring of computations. To verify the usability of the proposed system, two applications were introduced. The first is the e-AIRS (Aerospace Integrated Research System), which adapts the proposed computational system to solve CFD problems. The second is the design and optimization of three-dimensional protein structures in structural biology.
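A parametric study service of the kind described fans many parameter combinations out to workers and gathers the results. The sketch below imitates that pattern with local worker processes standing in for Grid-submitted jobs; run_simulation, the parameter names, and the worker count are placeholders, not part of e-AIRS.

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    """Stand-in for one solver run; in a real service this would generate an
    input file from `params` and launch a remote CFD job."""
    mach, angle = params
    return {"mach": mach, "angle": angle, "result": mach * angle}  # placeholder value

def parametric_study(machs, angles, max_workers=4):
    """Fan the Cartesian product of parameters out to worker processes and
    collect results, mimicking a throughput-oriented parameter sweep."""
    cases = list(itertools.product(machs, angles))
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_simulation, cases))

if __name__ == "__main__":
    for r in parametric_study(machs=[0.3, 0.5, 0.8], angles=[0, 2, 4, 6]):
        print(r)
```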

Image Quality Assessment Considering both Computing Speed and Robustness to Distortions (계산 속도와 왜곡 강인성을 동시 고려한 이미지 품질 평가)

  • Kim, Suk-Won;Hong, Seongwoo;Jin, Jeong-Chan;Kim, Young-Jin
    • Journal of KIISE
    • /
    • v.44 no.9
    • /
    • pp.992-1004
    • /
    • 2017
  • To assess image quality accurately, an image quality assessment (IQA) metric must reflect the human visual system (HVS) properly. In other words, the structure, color, and contrast of the image should be evaluated with various factors taken into account. In addition, as mobile embedded devices such as smartphones become popular, fast computation is also important. The proposed IQA metric combines color similarity, gradient similarity, and phase similarity synergistically to satisfy the HVS, and is designed with optimized pooling and quantization for fast computation. The proposed IQA metric is compared against 13 existing methods using four kinds of evaluation criteria. The experimental results show that it ranks first on three of the evaluation criteria and second on the remaining one, behind only VSI, the most notable existing IQA metric. Its computing speed is on average about 20% faster than VSI's. In addition, we find that the proposed IQA metric correlates more closely with the HVS than existing IQA metrics.
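The abstract names gradient similarity as one ingredient of the metric. As a hedged illustration of that single component (not the paper's full metric, whose pooling and quantization details are not given here), the sketch below computes a per-pixel gradient-magnitude similarity map and average-pools it; the stabilizing constant c is an arbitrary choice.

```python
import numpy as np
from scipy import ndimage

def gradient_similarity(ref, dist, c=170.0):
    """Per-pixel gradient-magnitude similarity between a reference and a distorted
    grayscale image, a common building block of similarity-based IQA metrics."""
    gx_r = ndimage.sobel(ref, axis=1, mode="reflect")
    gy_r = ndimage.sobel(ref, axis=0, mode="reflect")
    gx_d = ndimage.sobel(dist, axis=1, mode="reflect")
    gy_d = ndimage.sobel(dist, axis=0, mode="reflect")
    gm_r = np.hypot(gx_r, gy_r)
    gm_d = np.hypot(gx_d, gy_d)
    return (2 * gm_r * gm_d + c) / (gm_r ** 2 + gm_d ** 2 + c)

# Average pooling of the similarity map yields a single quality score; the paper's
# metric instead combines color, gradient, and phase maps with its own pooling.
ref = np.random.rand(64, 64) * 255
dist = ref + np.random.normal(0, 10, ref.shape)
print(f"gradient-similarity score: {gradient_similarity(ref, dist).mean():.4f}")
```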

An Efficient Method for Determining Work Process Number of Each Node on Computation Grid (계산 그리드 상에서 각 노드의 작업 프로세스 수를 결정하기 위한 효율적인 방법)

  • Kim Young-Hak;Cho Soo-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.1
    • /
    • pp.189-199
    • /
    • 2005
  • Grid computing is a technique for solving large problems, such as those in scientific computing, by sharing the computing power and storage of numerous computers on a distributed network. A grid computing environment is built on wide-area networks with differing performance and heterogeneous network conditions, so it is important that these heterogeneous performance factors be reflected in how computational work is assigned. In this paper, we propose an efficient method that determines the number of work processes for each node by taking network state information into account. The network state information includes latency, bandwidth, and combined latency-bandwidth information. First, using the measured information, we compute a performance ratio and determine the number of work processes for each node. Finally, an RSL file is created automatically based on the decided number of work processes, and the job is then executed. The network performance information is collected by NWS. According to the experimental results, the method that takes network performance information into account improves on existing methods by 23%, 31%, and 57% in terms of work amount, number of work processes, and number of nodes, respectively.
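A simple way to turn measured latency and bandwidth into per-node process counts is to allocate in proportion to a performance ratio. The sketch below does exactly that; the ratio bandwidth/latency, the node names, and the measurements are illustrative assumptions, since the paper's exact ratio formula is not reproduced here.

```python
def allocate_processes(nodes, total_processes):
    """Split a fixed number of work processes across nodes in proportion to a
    simple performance ratio (bandwidth / latency, both measured, e.g. by NWS).
    A larger ratio means more processes assigned to that node."""
    ratios = {name: bw / lat for name, (lat, bw) in nodes.items()}
    total = sum(ratios.values())
    alloc = {name: int(total_processes * r / total) for name, r in ratios.items()}
    # Hand processes lost to rounding to the fastest nodes first.
    leftover = total_processes - sum(alloc.values())
    for name in sorted(ratios, key=ratios.get, reverse=True)[:leftover]:
        alloc[name] += 1
    return alloc

# Hypothetical measurements: node -> (latency in ms, bandwidth in Mbit/s).
nodes = {"node-a": (2.0, 900.0), "node-b": (8.0, 100.0), "node-c": (4.0, 400.0)}
print(allocate_processes(nodes, total_processes=16))
```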

Maximizing Concurrency and Analyzable Timing Behavior in Component-Oriented Real-Time Distributed Computing Application Systems

  • Kim, Kwang-Hee Kane;Colmenares, Juan A.
    • Journal of Computing Science and Engineering
    • /
    • v.1 no.1
    • /
    • pp.56-73
    • /
    • 2007
  • Demands have been growing in safety-critical application fields for producing networked real-time embedded computing (NREC) systems together with acceptable assurances of tight service time bounds (STBs). Here a service time can be defined as the amount of time that the NREC system could take in accepting a request, executing an appropriate service method, and returning a valid result. Enabling systematic composition of large-scale NREC systems with STB certifications has been recognized as a highly desirable goal by the research community for many years. An appealing approach for pursuing such a goal is to establish a hard-real-time (HRT) component model that contains its own STB as an integral part. The TMO (Time-Triggered Message-Triggered Object) programming scheme is one HRT distributed computing (DC) component model established by the first co-author and his collaborators over the past 15 years. The TMO programming scheme has been intended to be an advanced high-level RT DC programming scheme that enables development of NREC systems and validation of tight STBs of such systems with efforts far smaller than those required when any existing lower-level RT DC programming scheme is used. An additional goal is to enable maximum exploitation of concurrency without damaging any major structuring and execution approaches adopted for meeting the first two goals. A number of previously untried program structuring approaches and execution rules were adopted from the early development stage of the TMO scheme. This paper presents new concrete justifications for those approaches and rules, and also discusses new extensions of the TMO scheme intended to enable further exploitation of concurrency in NREC system design and programming.

A Job Allocation Manager for Dynamic Remote Execution of Distributed Jobs in P2P Network (분산처리 작업의 동적 원격실행을 위한 P2P 기반 작업 할당 관리자)

  • Lee, Seung-Ha;Kim, Yang-Woo
    • Journal of Internet Computing and Services
    • /
    • v.7 no.6
    • /
    • pp.87-103
    • /
    • 2006
  • Advances in computer and network technology provide computing environments that were previously possible only with supercomputers. Providing such an environment requires a distributed runtime system, but most conventional distributed runtime systems cannot reconfigure themselves dynamically and flexibly as the workload varies, due to a static architecture of a fixed master node and slave worker nodes. This paper proposes and implements a new model for distributed job allocation and management: a distributed runtime system in a P2P environment that supports flexible and dynamic system reconfiguration. The implemented system enables job program transfer and management, as well as remote compilation and execution among cooperating developers, based on the Jxta platform, a standard P2P protocol. Because it makes dynamic and flexible system reconfiguration possible, the proposed method can collect and utilize idle computing resources immediately when they are needed for distributed job processing. Moreover, the implemented system's effectiveness and performance gains are demonstrated by running crawler jobs in a distributed way to collect the large amount of data needed for Internet search.
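To illustrate the general pattern of handing crawler jobs to whichever peers are currently idle, the sketch below uses a shared queue and worker threads as a greatly simplified stand-in for the Jxta-based peer group; the peer names, job URLs, and timing are invented for illustration.

```python
import queue
import threading
import time

job_queue = queue.Queue()
results = []

def worker(peer_name, idle_seconds):
    """A peer that joins, pulls jobs while it is idle, then leaves.
    Stands in for a Jxta peer doing remote compile-and-execute of crawler jobs."""
    deadline = time.time() + idle_seconds
    while time.time() < deadline:
        try:
            url = job_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        results.append((peer_name, url))   # pretend the page was crawled
        job_queue.task_done()

# Crawler jobs to distribute; peers can join and leave at different times.
for i in range(20):
    job_queue.put(f"http://example.org/page/{i}")

threads = [threading.Thread(target=worker, args=(f"peer-{n}", 2.0)) for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} pages fetched by {len(set(p for p, _ in results))} peers")
```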
