• Title/Summary/Keyword: in-memory computing

Lightweight Single Image Super-Resolution by Channel Split Residual Convolution

  • Liu, Buzhong
    • Journal of Information Processing Systems / v.18 no.1 / pp.12-25 / 2022
  • In recent years, deep convolutional neural networks have made significant progress in single image super-resolution. However, they are difficult to deploy on practical computing terminals or embedded devices due to their large number of parameters and heavy computation. To balance these concerns, we propose CSRNet, a lightweight neural network based on a channel split residual learning structure, to reconstruct high-resolution images from low-resolution images. Lightweight here means a network with fewer parameters and a simplified structure, giving lower memory consumption and faster inference while ensuring that the quality of the recovered high-resolution images is not degraded. In CSRNet, we reduce parameters and computation through channel split residual learning. We also propose a double-upsampling network structure that improves the performance of the lightweight super-resolution network and makes it easy to train. Finally, we propose a new evaluation metric for lightweight approaches named 100_FPS. Experiments show that the proposed CSRNet not only speeds up inference and reduces memory consumption, but also performs well on single image super-resolution.
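
A minimal PyTorch-style sketch of the channel-split residual idea this abstract describes: only part of the channels pass through the residual convolutions, which cuts parameters and computation. The layer sizes, split ratio, and class name are illustrative assumptions, not the authors' CSRNet implementation.

```python
# Hypothetical sketch of a channel-split residual block (not the authors' CSRNet code).
import torch
import torch.nn as nn

class ChannelSplitResidualBlock(nn.Module):
    """Split the channels, convolve only one half, then recombine with a residual add."""
    def __init__(self, channels: int = 64, split_ratio: float = 0.5):
        super().__init__()
        self.split = int(channels * split_ratio)          # channels that go through the convs
        self.body = nn.Sequential(
            nn.Conv2d(self.split, self.split, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(self.split, self.split, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = x[:, :self.split], x[:, self.split:]       # channel split
        a = self.body(a) + a                              # residual learning on one branch only
        return torch.cat([a, b], dim=1)                   # cheap recombination

if __name__ == "__main__":
    block = ChannelSplitResidualBlock(64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```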

Observer-Teacher-Learner-Based Optimization: An enhanced meta-heuristic for structural sizing design

  • Shahrouzi, Mohsen; Aghabaglou, Mahdi; Rafiee, Fataneh
    • Structural Engineering and Mechanics / v.62 no.5 / pp.537-550 / 2017
  • Structural sizing is a demanding task due to its non-convex, constrained nature in the design space. In order to provide both global exploration and proper search refinement, a hybrid method is developed here based on outstanding features of Evolutionary Computing and Teaching-Learning-Based Optimization. The new method introduces an observer phase for memory exploitation, in addition to the vector-sum movements of the original teacher and learner phases. A proper integer coding is adopted for structural size optimization, together with a fly-to-boundary technique and an elitism strategy. Performance of the proposed method is evaluated on a number of truss examples and compared with teaching-learning-based optimization. The results show the enhanced capability of the method for efficient and stable convergence toward the optimum and for capturing high-quality solutions in discrete structural sizing problems.
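
A rough numpy sketch of teaching-learning-based optimization extended with a memory-exploiting observer phase, as the abstract outlines. The exact update rules, the form of the observer move, and the test objective are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative TLBO with an extra memory-based "observer" phase (assumed form, not the paper's).
import numpy as np

def sphere(x):                      # placeholder objective; the paper targets truss sizing
    return np.sum(x ** 2)

def otlbo(obj, dim=10, pop=20, iters=100, lb=-5.0, ub=5.0, rng=np.random.default_rng(0)):
    X = rng.uniform(lb, ub, (pop, dim))
    F = np.apply_along_axis(obj, 1, X)
    memory = [X[np.argmin(F)].copy()]            # archive of past best designs

    def accept(X_new):
        nonlocal X, F
        F_new = np.apply_along_axis(obj, 1, X_new)
        better = F_new < F
        X[better], F[better] = X_new[better], F_new[better]

    for _ in range(iters):
        # Teacher phase: move toward the current best, away from the population mean.
        teacher, mean = X[np.argmin(F)], X.mean(axis=0)
        Tf = rng.integers(1, 3, (pop, 1))        # teaching factor in {1, 2}
        accept(np.clip(X + rng.random((pop, dim)) * (teacher - Tf * mean), lb, ub))

        # Learner phase: pairwise vector-sum moves between random learners.
        partners = rng.permutation(pop)
        step = np.where((F < F[partners])[:, None], X - X[partners], X[partners] - X)
        accept(np.clip(X + rng.random((pop, dim)) * step, lb, ub))

        # Observer phase (assumed form): exploit a memory of elite designs.
        elite = memory[rng.integers(len(memory))]
        accept(np.clip(X + rng.random((pop, dim)) * (elite - X), lb, ub))
        memory.append(X[np.argmin(F)].copy())

    best = np.argmin(F)
    return X[best], F[best]

if __name__ == "__main__":
    x_best, f_best = otlbo(sphere)
    print(f_best)
```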

Dynamic Memory Allocation for Scientific Workflows in Containers (컨테이너 환경에서의 과학 워크플로우를 위한 동적 메모리 할당)

  • Adufu, Theodora; Choi, Jieun; Kim, Yoonhee
    • Journal of KIISE / v.44 no.5 / pp.439-448 / 2017
  • The workloads of large high-performance computing (HPC) scientific applications are steadily becoming "bursty" due to variable resource demands throughout their execution life-cycles. However, the over-provisioning of virtual resources for optimal performance during execution remains a key challenge in the scheduling of scientific HPC applications. While over-provisioning of virtual resources guarantees peak performance of scientific applications in virtualized environments, it results in increased amounts of idle resources that are unavailable for use by other applications. Herein, we propose a memory resource reconfiguration approach that allows the quick release of idle memory resources for new applications in OS-level virtualized systems, based on each application's resource-usage profile data. We deploy a scientific workflow application in Docker, a lightweight OS-level virtualization system. In the proposed approach, the memory allocated to containers is fine-tuned at each stage of the workflow's execution life-cycle, which improves overall memory resource utilization.
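
A minimal sketch of per-stage memory reconfiguration for a running container using the standard `docker update --memory` command. The stage names, limits, and container name are illustrative assumptions; the paper's profiling tooling is not reproduced.

```python
# Illustrative sketch: resize a container's memory limit at each workflow stage
# with the standard `docker update` CLI (stage profile values are assumptions).
import subprocess

# Hypothetical per-stage memory profile derived from prior runs (stage -> limit).
STAGE_MEMORY_PROFILE = {
    "preprocess":  "512m",
    "simulate":    "4g",
    "postprocess": "1g",
}

def reconfigure_memory(container: str, stage: str) -> None:
    """Shrink or grow the container's memory limit to match the stage's profiled need."""
    limit = STAGE_MEMORY_PROFILE[stage]
    subprocess.run(
        ["docker", "update", "--memory", limit, "--memory-swap", limit, container],
        check=True,
    )

if __name__ == "__main__":
    for stage in ["preprocess", "simulate", "postprocess"]:
        reconfigure_memory("workflow-worker", stage)   # hypothetical container name
        # ... run the stage inside the container here ...
```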

Java Memory Model Simulation using SMT Solver (SMT 해결기를 이용한 자바 메모리 모델 시뮬레이션)

  • Lee, Tae-Hoon; Kwon, Gi-Hwon
    • Journal of KIISE: Computing Practices and Letters / v.15 no.1 / pp.62-66 / 2009
  • Recently developed compilers perform optimizations in order to speed up the execution of a source program. These optimizations may reorder the sequence of program statements. Such reordering causes no problems in a single-threaded program; however, it can introduce significant errors in a multi-threaded program. State-of-the-art model checkers such as Java PathFinder do not consider the transformations produced by the compiler's optimization step, since they assume a single memory model. In this paper, we describe a new technique based on an SMT solver. The SMT-based Java memory model simulator computes all possible outputs of a given multi-threaded program within one second, whereas a traditional Java memory model simulator takes about one minute.
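
An illustrative Z3 (Python API) encoding in the spirit of the SMT-based simulation described above: it enumerates the possible outputs of a two-thread store-buffering litmus test with and without statement reordering. The litmus test and the encoding are assumptions, not the authors' tool.

```python
# Illustrative Z3 encoding of a store-buffering litmus test (not the authors' simulator).
# Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x       (x, y initially 0)
from z3 import Ints, Solver, Or, If, Distinct, sat

wx, ry, wy, rx = Ints("wx ry wy rx")   # timestamps of the four statements
r1, r2 = Ints("r1 r2")                 # values observed by the two reads

def outputs(allow_reordering: bool):
    s = Solver()
    s.add(Distinct(wx, ry, wy, rx))
    if not allow_reordering:
        s.add(wx < ry, wy < rx)        # program order preserved inside each thread
    # A read returns 1 iff the other thread's write happened before it, else the initial 0.
    s.add(r1 == If(wy < ry, 1, 0))
    s.add(r2 == If(wx < rx, 1, 0))

    seen = set()
    while s.check() == sat:
        m = s.model()
        seen.add((m[r1].as_long(), m[r2].as_long()))
        s.add(Or(r1 != m[r1], r2 != m[r2]))   # block this output, search for others
    return sorted(seen)

if __name__ == "__main__":
    print("without reordering:", outputs(False))  # (0, 0) is absent
    print("with reordering:   ", outputs(True))   # (0, 0) becomes reachable
```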

A Study on System for Reusing of Wireless Internet Contents (무선 인터넷 컨텐츠의 재사용을 위한 시스템 연구)

  • Kim, Jeong-Hoon
    • Journal of Internet Computing and Services / v.4 no.1 / pp.1-8 / 2003
  • Nowadays, various wireless internet contents are serviced on a cellular phone. However, because of the memory limit of the cellular phone, users must delete wireless internet contents periodically. It is also quite inconvenient to move existing data to a new cellular phone. In this paper, I propose MobileFolder for reusing wireless internet contents and provide a simple client application in a VM environment.

Innovative Solutions for Design and Fabrication of Deep Learning Based Soft Sensor

  • Khdhir, Radhia; Belghith, Aymen
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.131-138 / 2022
  • Soft sensors are used to anticipate complicated model parameters using data that are comparatively easy to gather. The goal of this study is to use artificial intelligence techniques to design and build soft sensors. The combination of a Long Short-Term Memory (LSTM) network and Grey Wolf Optimization (GWO) is used to create a unique soft sensor. The LSTM is developed to tackle the strong nonlinearity and unpredictability of manufacturing applications in the learning stage. GWO is used as an input optimization technique for the LSTM in order to reduce unnecessary model complexity. The newly designed soft sensor combines the LSTM's superior dynamic modeling with GWO's exact variable selection. The performance of our proposal is demonstrated using simulations on real-world datasets.
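
A rough numpy sketch of Grey Wolf Optimization applied to binary input selection, with a placeholder fitness standing in for the LSTM's validation error. The encoding, parameters, and fitness function are assumptions for illustration, not the authors' design.

```python
# Illustrative GWO for selecting LSTM input variables (placeholder fitness, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_WOLVES, N_ITERS = 12, 10, 50

def fitness(mask: np.ndarray) -> float:
    """Stand-in for LSTM validation error: pretend features 0-3 matter, penalize extra inputs."""
    return (4 - mask[:4].sum()) + 0.1 * mask.sum()

def gwo_select():
    X = rng.random((N_WOLVES, N_FEATURES))              # continuous positions in [0, 1]
    for t in range(N_ITERS):
        scores = np.array([fitness((x > 0.5).astype(int)) for x in X])
        alpha, beta, delta = X[np.argsort(scores)[:3]]  # three best wolves lead the pack
        a = 2 - 2 * t / N_ITERS                         # exploration coefficient decays to 0

        def pull(leader):                               # canonical GWO position update
            A = 2 * a * rng.random(X.shape) - a
            C = 2 * rng.random(X.shape)
            return leader - A * np.abs(C * leader - X)

        X = np.clip((pull(alpha) + pull(beta) + pull(delta)) / 3, 0, 1)
    best_idx = np.argmin([fitness((x > 0.5).astype(int)) for x in X])
    return (X[best_idx] > 0.5).astype(int)              # final binary selection mask

if __name__ == "__main__":
    print("selected inputs:", np.nonzero(gwo_select())[0])
```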

A Study on Design and Cache Replacement Policy for Cascaded Cache Based on Non-Volatile Memories (비휘발성 메모리 시스템을 위한 저전력 연쇄 캐시 구조 및 최적화된 캐시 교체 정책에 대한 연구)

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.106-111 / 2023
  • The importance of load-to-use latency has been highlighted as state-of-the-art computing cores adopt deep pipelines and high clock frequencies. The cascaded cache was recently proposed to reduce the access cycles of the L1 cache by exploiting the latency differences among the banks of the cache structure. However, the original study assumes the cache is composed of SRAM, making the scheme unsuitable for direct application to non-volatile memory-based systems. This paper proposes a novel mechanism and structure for lowering dynamic energy consumption. It inserts monitoring logic to keep track of swap operations and write counts. If the ratio of swap operations to total writes surpasses a set threshold, the cache controller skips the swap of cache blocks, thereby reducing write operations. To validate this approach, experiments are conducted on a non-volatile memory-based cascaded cache. The results show a reduction in write operations by an average of 16.7% with a negligible increase in latency.
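
A minimal software model of the swap-skipping decision the abstract describes: a controller tracks swap operations and total writes and skips block swaps once their ratio exceeds a threshold. The counter names and threshold value are assumptions, not the paper's hardware design.

```python
# Illustrative model of the swap-skipping policy (threshold and counters are assumptions).
class CascadedCacheController:
    def __init__(self, swap_ratio_threshold: float = 0.3):
        self.threshold = swap_ratio_threshold
        self.total_writes = 0
        self.swap_writes = 0

    def on_write(self) -> None:
        """Count an ordinary write into the NVM cache."""
        self.total_writes += 1

    def should_swap(self) -> bool:
        """Skip block swaps between fast/slow banks once swaps dominate the write traffic."""
        if self.total_writes == 0:
            return True
        return self.swap_writes / self.total_writes <= self.threshold

    def on_hit_in_slow_bank(self) -> None:
        if self.should_swap():
            self.swap_writes += 1
            self.total_writes += 1
            # ... swap the cache blocks between the fast and slow banks ...
        # else: serve the hit in place, avoiding the extra NVM write operations
```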

An Efficient Buffer Page Replacement Strategy for System Software on Flash Memory (플래시 메모리상에서 시스템 소프트웨어의 효율적인 버퍼 페이지 교체 기법)

  • Park, Jong-Min; Park, Dong-Joo
    • Journal of KIISE: Databases / v.34 no.2 / pp.133-140 / 2007
  • Flash memory has penetrated our lives in various forms. For example, flash memory is an important storage component of ubiquitous-computing and mobile products such as cell phones, MP3 players, PDAs, and portable storage kits. Behind this wide acceptance are the many advantages of flash memory: for instance, low power consumption, non-volatility, stability, and portability. In addition to these strengths, the recent development of gigabyte-capacity flash memory suggests that flash memory may replace some of the storage roles currently dominated by hard disks. However, a block must be erased before it can be overwritten, which leads to asymmetric costs for reading, writing, and erasing in flash memory [1][5][6]. Since this asymmetry is not considered by the traditional buffer replacement policies adopted in system software such as operating systems and DBMSs, a new buffer replacement strategy becomes necessary. In this paper, we propose a buffer replacement strategy that reflects the differing I/O costs of flash memory and compare it with other buffer replacement strategies using Zipfian-distributed workloads and real data.
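
A minimal sketch of one way a buffer manager can reflect asymmetric flash I/O costs, by preferring to evict clean pages (no write-back) over dirty ones near the LRU end. The cost weights and eviction window are assumptions for illustration and not the strategy proposed in the paper.

```python
# Illustrative cost-aware buffer manager that prefers evicting clean pages (weights are assumptions).
from collections import OrderedDict

READ_COST, WRITE_COST = 1, 10          # assumed relative flash I/O costs (write >> read)

class CostAwareBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()     # page_id -> dirty flag, kept in LRU order
        self.total_io_cost = 0

    def access(self, page_id: int, is_write: bool = False) -> None:
        if page_id not in self.pages:
            self.total_io_cost += READ_COST           # page fault: fetch from flash
        dirty = self.pages.pop(page_id, False) or is_write
        self.pages[page_id] = dirty                   # move to the MRU end
        if len(self.pages) > self.capacity:
            self._evict()

    def _evict(self) -> None:
        """Among the coldest pages, evict a clean one if possible; dirty evictions cost a write."""
        window = list(self.pages.items())[: max(2, self.capacity // 2)]   # LRU-side window
        clean = [pid for pid, dirty in window if not dirty]
        victim = clean[0] if clean else window[0][0]
        if self.pages.pop(victim):
            self.total_io_cost += WRITE_COST          # dirty page must be written back

if __name__ == "__main__":
    buf = CostAwareBuffer(capacity=3)
    for pid, w in [(1, True), (2, False), (3, False), (4, False)]:
        buf.access(pid, is_write=w)
    print(sorted(buf.pages), buf.total_io_cost)       # dirty page 1 is kept; clean page 2 evicted
```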

A Study of the Efficient Cloud Migration Technique and Process based on Open Source Software (오픈 소스 기반의 효율적인 클라우드 마이그레이션 절차에 관한 연구)

  • Park, In-Geun; Lee, Eun-Seok; Park, Jong-Kook; Kim, Jong-Bae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.280-283 / 2014
  • Cloud computing virtualizes logical resources such as CPU, memory, and disks on top of physical machines. This virtualization technology increases computing resource utilization and supports dynamic resource allocation. Because of these benefits, global cloud computing services such as Amazon AWS, Google Cloud, and Apple iCloud are prevalent. With these cloud computing services, there is growing demand for cloud migration between different cloud environments. If a service operating in one cloud environment is to migrate to another cloud environment, the two environments must be compatible, but even global cloud computing services do not guarantee this compatibility. In this paper, we suggest a process and techniques for cloud migration based on open source software.

Visualization of Internal Electric Field on Plasma (플라즈마 내부 전기장 가시화)

  • Shin, Han Sol; Yu, Tae Jun; Lee, Kun
    • Journal of Korea Multimedia Society / v.19 no.1 / pp.80-85 / 2016
  • Sampling the space to compute charge density and then calculating the electric field is expensive in both memory usage and time for large plasma datasets. In real-time and interactive applications, accelerating this computation is a critical problem. In this paper, we suggest a new method to visualize the electric field using the convolution theorem, together with GPGPU parallel computing to accelerate the computation. We conduct a simulation that compares the running times of the methods with and without convolution. We also discuss the visualization of multivariate data in three-dimensional space using colored volume rendering and surface construction.
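
A minimal numpy sketch of the convolution-theorem idea the abstract describes: the potential is obtained by multiplying the FFT of the charge density with the FFT of a Coulomb kernel, and the field follows from the negative gradient. The grid, kernel, and boundary handling are assumptions; the paper's GPGPU implementation is not reproduced.

```python
# Illustrative FFT-based field computation via the convolution theorem (grid/kernel are assumptions).
import numpy as np

def electric_field_from_density(rho: np.ndarray, spacing: float = 1.0):
    """Convolve charge density with a 1/r Coulomb kernel in Fourier space, then take -grad(phi)."""
    n = rho.shape[0]
    ax = np.fft.fftfreq(n, d=1.0 / n) * spacing           # signed grid offsets per axis
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    kernel = np.where(r > 0, 1.0 / np.maximum(r, 1e-12), 0.0)   # 1/r, singular cell zeroed

    # Convolution theorem: conv(rho, kernel) = IFFT(FFT(rho) * FFT(kernel)), O(n^3 log n).
    phi = np.real(np.fft.ifftn(np.fft.fftn(rho) * np.fft.fftn(kernel)))
    Ex, Ey, Ez = np.gradient(-phi, spacing)               # E = -grad(phi)
    return Ex, Ey, Ez

if __name__ == "__main__":
    n = 32
    rho = np.zeros((n, n, n))
    rho[n // 2, n // 2, n // 2] = 1.0                     # single point charge at the center
    Ex, Ey, Ez = electric_field_from_density(rho)
    print(Ex.shape, float(np.abs(Ex).max()))
```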