• Title/Summary/Keyword: in-memory computing

Search Results: 766

Mobile Cloud Context-Awareness System based on Jess Inference and Semantic Web RL for Inference Cost Decline (추론 비용 감소를 위한 Jess 추론과 시멘틱 웹 RL기반의 모바일 클라우드 상황인식 시스템)

  • Jung, Se-Hoon;Sim, Chun-Bo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.19-30
    • /
    • 2012
  • The context-aware service recognizes the surroundings of the users who receive the service, via computing and communication, and makes decisions on its own in order to provide them with useful information. However, a CAS (Context Awareness System) in the current mobile environment suffers from limited device capability, small memory space, and growing inference cost, and can therefore handle only small-scale context-awareness processing. In this paper, we propose a mobile cloud context-awareness system built on Google App Engine, a PaaS (Platform as a Service), so that context services can be used on various mobile devices without being tied to a specific platform. The inference design of the proposed system combines a knowledge-based framework using semantic inference, expressed with SWRL rules and an OWL ontology, with the rule-based inference engine Jess. In addition, to overcome the drawbacks of the SPARQL query-based reasoning used in previous semantic-search approaches, the regular information of SWRL, such as Class, Property, and Individual values, is mapped to the Jess reasoning engine through the JessTab plug-in, thereby shortening the reasoning time of the context service.
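
As an illustration of the rule-mapping idea in this entry, here is a minimal sketch in Java, assuming the Jess 7 Java API (jess.Rete with eval()/run()); the template, the rule, and the asserted individual are hypothetical examples, not the paper's actual ontology or rule set.

```java
import jess.JessException;
import jess.Rete;

// Minimal sketch: asserting an OWL-style individual as a Jess fact and firing a
// context rule, in the spirit of the SWRL-to-Jess mapping done via JessTab.
public class ContextRuleDemo {
    public static void main(String[] args) throws JessException {
        Rete engine = new Rete();

        // Hypothetical template standing in for an OWL class with three properties.
        engine.eval("(deftemplate Person (slot name) (slot location) (slot temperature))");

        // Hypothetical context rule (what a translated SWRL rule might look like):
        // if a person is at home and the room is cold, recommend turning on the heater.
        engine.eval("(defrule recommend-heating" +
                    "  (Person (name ?n) (location home) (temperature ?t&:(< ?t 18)))" +
                    "  => (printout t ?n \" should turn on the heater\" crlf))");

        // An OWL individual mapped into a Jess fact.
        engine.eval("(assert (Person (name \"alice\") (location home) (temperature 15)))");

        engine.run();   // fires the matching rule
    }
}
```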

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, and researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from introducing specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs only a single program or a few dedicated tasks, so adding such instructions to a microprocessor to create an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost. (A short plain-Java sketch of max-min inference follows this entry.)

    TABLE I. INFERENCE TIME BY 51 RULES
                        MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences     125 s                  49 s                        0.0038 s
    1 inference         20.8 ms                8.2 ms                      6.4 µs
    FLIPS               48                     122                         156,250

  • PDF
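
To make the role of the min and max operations concrete, here is a small plain-Java sketch of max-min compositional inference over fuzzy sets discretized into 64-element arrays, as in the abstract; the membership values and the single rule are invented for illustration.

```java
// Plain-Java sketch of max-min fuzzy inference for one rule "IF A AND B THEN E",
// with fuzzy sets discretized into 64-element membership arrays (values in [0,1]).
public class MaxMinInference {

    // Rule firing strength: min of the degrees to which the inputs match A and B.
    static double firingStrength(double muA, double muB) {
        return Math.min(muA, muB);
    }

    // Clip the consequent fuzzy set E by the firing strength (Mamdani implication),
    // then aggregate into the output set with max. min/max dominate this inner loop,
    // which is why dedicated min/max instructions pay off.
    static void applyRule(double strength, double[] consequent, double[] output) {
        for (int i = 0; i < output.length; i++) {
            double clipped = Math.min(strength, consequent[i]);
            output[i] = Math.max(output[i], clipped);
        }
    }

    // Centroid defuzzification over the discretized universe 0..63.
    static double centroid(double[] output) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < output.length; i++) {
            num += i * output[i];
            den += output[i];
        }
        return den == 0.0 ? 0.0 : num / den;
    }

    public static void main(String[] args) {
        double[] consequent = new double[64];
        for (int i = 0; i < 64; i++) {
            consequent[i] = Math.max(0.0, 1.0 - Math.abs(i - 40) / 10.0);  // triangular set
        }

        double[] output = new double[64];
        double strength = firingStrength(0.7, 0.4);   // invented membership degrees
        applyRule(strength, consequent, output);
        System.out.println("Defuzzified output: " + centroid(output));
    }
}
```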

Contents Conversion System for Mobile Devices using Light-Weight Web Document (웹 문서 경량화에 의한 모바일용 콘텐츠 변환 시스템)

  • Kim Jeong-Hee;Kwon Hoon;Kwak Ho-Young
    • Journal of Internet Computing and Services
    • /
    • v.6 no.6
    • /
    • pp.13-22
    • /
    • 2005
  • This paper aims to develop a system for converting web contents into mobile contents that can be used on mobile devices. Since web contents generally contain pop-up ad windows, a bunch of unnecessary images, and useless links, it is difficult to display them efficiently on common mobile devices, which have lower bandwidth, less memory, and much smaller screens than the desktop web environment, and it is troublesome for mobile device users to access such contents directly. Thus, there has been great demand for a method of extracting useful and adequate contents from web documents and optimizing them for use on mobile phones. In this paper, a system based on WAP 2.0 and XHTML Basic, the content-creation language adopted for WAP 2.0, is suggested. The system converts web contents by applying the conversion rules of the existing filtering method after first reducing the size of the web documents. The adopted conversion rules use XHTML Basic's module units so that modification and deletion can be carried out with ease, and they are defined in an XSL document written in XSLT to maintain the extensibility of the conversion and the validity of the documents. In order to work efficiently with WAP 1.X legacy services, the system is built with modules that analyze CC/PP profile information and mobile device headers. (A brief XSLT-invocation sketch follows this entry.)

  • PDF
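
The conversion pipeline described in this entry (filtering rules expressed in XSLT applied to a lightened web document) can be driven from Java with the standard JAXP transformation API; the sketch below assumes hypothetical file names for the stylesheet and documents, not the paper's actual artifacts.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

// Minimal sketch: apply an XSLT stylesheet holding the conversion rules to an
// already-lightened web document, producing an XHTML Basic document for WAP 2.0.
public class ContentsConverter {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();

        // "to-xhtml-basic.xsl" is a hypothetical stylesheet encoding module-level
        // filtering rules (drop ad windows, unnecessary images, useless links, ...).
        Transformer transformer =
                factory.newTransformer(new StreamSource(new File("to-xhtml-basic.xsl")));

        // Device characteristics (e.g., from CC/PP profiles or request headers)
        // can be passed to the stylesheet as parameters.
        transformer.setParameter("screen-width", "176");

        transformer.transform(new StreamSource(new File("lightened-page.xml")),
                              new StreamResult(new File("mobile-page.xhtml")));
    }
}
```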

Review on the Three-Dimensional Inversion of Magnetotelluric Data (MT 자료의 3차원 역산 개관)

  • Kim Hee Joon;Nam Myung Jin;Han Nuree;Choi Jihyang;Lee Tae Jong;Song Yoonho;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.7 no.3
    • /
    • pp.207-212
    • /
    • 2004
  • This article reviews recent developments in three-dimensional (3-D) magnetotelluric (MT) imaging. The inversion of MT data is fundamentally ill-posed, and the resulting solution is therefore non-unique. A regularizing scheme must be employed to reduce the non-uniqueness while retaining certain a priori information in the solution. The standard approach to nonlinear inversion in geophysics has been the Gauss-Newton method, which solves a sequence of linearized inverse problems. When run to convergence, the algorithm minimizes an objective function over the space of models and in that sense produces an optimal solution of the inverse problem. The general usefulness of iterative, linearized inversion algorithms, however, is greatly limited in 3-D MT applications by the requirement of computing the Jacobian (partial derivative, sensitivity) matrix of the forward problem. The difficulty may be relaxed using conjugate gradient (CG) methods. A linear CG technique is used to solve each step of the Gauss-Newton iteration incompletely, while the method of nonlinear CG is applied directly to the minimization of the objective function. These CG techniques replace the computation of the Jacobian matrix and the solution of a large linear system with computations equivalent to only three forward problems per inversion iteration. Consequently, the algorithms are efficient in computational speed and memory requirements, making 3-D inversion feasible.
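
For reference, a standard (assumed, generic-notation) form of the regularized objective function and the Gauss-Newton update discussed in this entry can be written as follows; the symbols are not taken from the reviewed papers.

```latex
% d = observed data, F(m) = forward response of model m, m_0 = reference model,
% W_d, W_m = data and model weighting matrices, \lambda = regularization parameter.
\Phi(\mathbf{m}) =
  \left\| \mathbf{W}_d \left( \mathbf{d} - F(\mathbf{m}) \right) \right\|^2
  + \lambda \left\| \mathbf{W}_m \left( \mathbf{m} - \mathbf{m}_0 \right) \right\|^2

% One Gauss-Newton iteration solves the linearized normal equations for the model
% update \delta m, with J = \partial F / \partial m the Jacobian (sensitivity) matrix:
\left( \mathbf{J}^{\mathsf T} \mathbf{W}_d^{\mathsf T} \mathbf{W}_d \mathbf{J}
     + \lambda \mathbf{W}_m^{\mathsf T} \mathbf{W}_m \right) \delta\mathbf{m}
  = \mathbf{J}^{\mathsf T} \mathbf{W}_d^{\mathsf T} \mathbf{W}_d
      \left( \mathbf{d} - F(\mathbf{m}_k) \right)
  - \lambda \mathbf{W}_m^{\mathsf T} \mathbf{W}_m \left( \mathbf{m}_k - \mathbf{m}_0 \right)

% The CG variants mentioned in the abstract avoid forming J explicitly: the linear
% system (or \Phi itself) is attacked using only Jacobian-vector products, each
% obtainable at the cost of roughly one additional forward solve.
```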

Implementation of a File System for Flash Memory (플래시 메모리를 위한 파일 시스템의 구현)

  • Park, Sang-Ho;Ahn, Woo-Hyun;Park, Dae-Yeon;Kim, Jeong-Ki;Park, Sung-Min
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.5
    • /
    • pp.402-415
    • /
    • 2001
  • The advantages of flash memories are their shock resistance and fast read speed, which is much faster than that of an HDD. Because of these characteristics, they are increasingly used in household electric appliances and portable handsets, and the development of file systems that use them as a storage medium is therefore increasingly needed. However, they have two problems as a storage medium. First, data stored in them cannot be overwritten: a block must be erased before new data can be stored, and this erase operation usually takes about one second, so updating data in flash memory takes a long time. In this paper, this problem is solved by using a data-update mechanism similar to that of LFS (the Log-structured File System). Second, the number of erase operations a block can endure is limited, so we propose a novel cleaning policy in order to increase the life cycle. We implemented a FAT file system, which is suitable for small storage media, and solved the problems that usually arise in implementing FAT. We evaluated the performance of sequential and random writes on our flash file system. (A brief sketch of the out-of-place update idea follows this entry.)

  • PDF
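
As a rough illustration of the log-structured update idea used to hide the roughly one-second erase latency, here is a plain-Java sketch of out-of-place (append) updates with page remapping; the page count, the in-memory map, and the deferred-cleaning comment are simplified assumptions, not the paper's actual on-flash layout.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch: instead of erasing before every update, new data is appended to
// a clean page and a logical-to-physical map is updated (LFS-style). Erases are
// deferred to a background cleaner that reclaims blocks full of stale pages.
public class LogStructuredFlash {
    static final int PAGES = 1024;                 // simplified flash size (in pages)
    private final byte[][] pages = new byte[PAGES][];
    private final Map<Integer, Integer> logicalToPhysical = new HashMap<>();
    private int nextFreePage = 0;

    // Out-of-place update: write to the next clean page, then remap the logical page.
    void write(int logicalPage, byte[] data) {
        if (nextFreePage >= PAGES) {
            throw new IllegalStateException("no clean pages; cleaner must reclaim blocks");
        }
        pages[nextFreePage] = data.clone();
        // The previously mapped physical page (if any) becomes stale; it will be
        // erased later, in bulk, according to the cleaning policy.
        logicalToPhysical.put(logicalPage, nextFreePage);
        nextFreePage++;
    }

    byte[] read(int logicalPage) {
        Integer phys = logicalToPhysical.get(logicalPage);
        return phys == null ? null : pages[phys];
    }

    public static void main(String[] args) {
        LogStructuredFlash fs = new LogStructuredFlash();
        fs.write(7, "v1".getBytes());
        fs.write(7, "v2".getBytes());                // update without an in-place erase
        System.out.println(new String(fs.read(7)));  // prints "v2"
    }
}
```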

An Improved Estimation Model of Server Power Consumption for Saving Energy in a Server Cluster Environment (서버 클러스터 환경에서 에너지 절약을 위한 향상된 서버 전력 소비 추정 모델)

  • Kim, Dong-Jun;Kwak, Hu-Keun;Kwon, Hui-Ung;Kim, Young-Jong;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.19A no.3
    • /
    • pp.139-146
    • /
    • 2012
  • In a server cluster environment, one way to save energy is to control server power according to traffic conditions, that is, to determine the ON/OFF state of servers according to the energy usage of the data center and of each server. To do this, we need a way to estimate each server's energy. In this paper, we use a software-based power consumption estimation model because it is more efficient, in terms of energy and cost, than a hardware approach using a power meter. The traditional software-based power consumption estimation model has a drawback: it uses only the idle-status field of the CPU, so it does not capture the computing status of servers well and does not estimate power consumption effectively. In this paper, we present a CPU-field-based power consumption estimation model that uses the various status fields of the CPU to obtain the CPU status of servers and the overall status of the system, and that estimates more accurately than the two traditional models (the CPU/disk/memory-utilization-based model and the CPU-idle-utilization-based model). We performed experiments using two PCs and compared the power consumption estimated by the model (software) with that measured by a power meter (hardware). The experimental results show that the traditional model has an average error rate of about 8-15%, whereas our proposed model has an average error rate of about 2%.
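
A minimal sketch of what a CPU-field-based estimator could look like, combining /proc/stat-style CPU time fractions linearly; the field weights and baseline power below are invented placeholders, and the paper's actual model coefficients are not reproduced here.

```java
// Sketch of a software power estimator that uses several CPU status fields
// (user, nice, system, iowait, irq, softirq) rather than only the idle field.
// The linear-model coefficients are invented placeholders for illustration.
public class CpuFieldPowerModel {

    // Fractions of total CPU time spent in each state since the last sample.
    static class CpuSample {
        double user, nice, system, idle, iowait, irq, softirq;
    }

    static final double P_IDLE = 60.0;   // assumed baseline (idle) power in watts
    static final double[] WEIGHTS = {    // assumed per-field dynamic-power weights (W)
        45.0,  // user
        40.0,  // nice
        50.0,  // system
        10.0,  // iowait
        30.0,  // irq
        25.0   // softirq
    };

    static double estimateWatts(CpuSample s) {
        double[] f = {s.user, s.nice, s.system, s.iowait, s.irq, s.softirq};
        double watts = P_IDLE;
        for (int i = 0; i < f.length; i++) {
            watts += WEIGHTS[i] * f[i];  // each busy state contributes its own cost
        }
        return watts;
    }

    public static void main(String[] args) {
        CpuSample s = new CpuSample();
        s.user = 0.30; s.system = 0.10; s.iowait = 0.05; s.idle = 0.55;
        System.out.printf("Estimated power: %.1f W%n", estimateWatts(s));
    }
}
```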

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In;Yun, Un-Il
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.77-83
    • /
    • 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow various important characteristics within databases to be analyzed easily and automatically. However, traditional frequent pattern mining methods, which simply extract all possible frequent patterns whose support values are not smaller than a user-given minimum support threshold, have the following problems. They can generate an enormous number of patterns, depending on the characteristics of the given database and the threshold setting, and this number can grow in geometric progression; they therefore waste runtime and memory resources, and the excessive number of generated patterns makes analysis of the mining results troublesome. In order to solve these issues, the concept of representative pattern mining and various related works have been proposed. In contrast to the traditional methods, which find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that consider the maximality or closure property of generated frequent patterns, and we compare and analyze these techniques. Given a frequent pattern, satisfying maximality means that every possible superset of the pattern has a support value smaller than the user-specified minimum support threshold; satisfying the closure property means that no superset of the pattern has a support equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can recover the original patterns without any information loss, so a complete set of original frequent patterns can be obtained from the closed ones. Although the maximal frequent pattern notation does not guarantee complete recovery in the process of pattern conversion, it has the advantage that it can extract a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage by conducting a performance evaluation on various real data sets collected from the real world. For a more exact comparison, the algorithms implementing these techniques are run on the same platform and at the same implementation level.
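
The maximality and closure checks described in this entry can be stated very compactly; the following plain-Java sketch filters a small hypothetical set of frequent itemsets, keeping the closed ones and the maximal ones (a brute-force illustration, not an efficient mining algorithm).

```java
import java.util.*;

// Sketch: given frequent itemsets and their supports, keep the closed patterns
// (no superset with the same support) and the maximal patterns (no frequent superset).
public class PatternCondensing {

    static boolean isSuperset(Set<String> candidate, Set<String> pattern) {
        return candidate.size() > pattern.size() && candidate.containsAll(pattern);
    }

    static Map<Set<String>, Integer> closed(Map<Set<String>, Integer> frequent) {
        Map<Set<String>, Integer> result = new HashMap<>();
        for (Map.Entry<Set<String>, Integer> e : frequent.entrySet()) {
            boolean isClosed = true;
            for (Map.Entry<Set<String>, Integer> other : frequent.entrySet()) {
                if (isSuperset(other.getKey(), e.getKey())
                        && other.getValue().equals(e.getValue())) {
                    isClosed = false;     // a superset has the same support
                    break;
                }
            }
            if (isClosed) result.put(e.getKey(), e.getValue());
        }
        return result;
    }

    static Set<Set<String>> maximal(Map<Set<String>, Integer> frequent) {
        Set<Set<String>> result = new HashSet<>();
        for (Set<String> p : frequent.keySet()) {
            boolean isMaximal = true;
            for (Set<String> other : frequent.keySet()) {
                if (isSuperset(other, p)) { isMaximal = false; break; }  // frequent superset exists
            }
            if (isMaximal) result.add(p);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Set<String>, Integer> frequent = new HashMap<>();
        frequent.put(new HashSet<>(List.of("a")), 5);
        frequent.put(new HashSet<>(List.of("a", "b")), 5);   // makes {a} non-closed
        frequent.put(new HashSet<>(List.of("c")), 4);
        System.out.println("Closed:  " + closed(frequent).keySet());
        System.out.println("Maximal: " + maximal(frequent));
    }
}
```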

Design of a Real-time Sensor Node Platform for Efficient Management of Periodic and Aperiodic Tasks (주기 및 비주기 태스크의 효율적인 관리를 위한 실시간 센서 노드 플랫폼의 설계)

  • Kim, Byoung-Hoon;Jung, Kyung-Hoon;Tak, Sung-Woo
    • The KIPS Transactions:PartC
    • /
    • v.14C no.4
    • /
    • pp.371-382
    • /
    • 2007
  • In this paper, we propose a real-time sensor node platform that efficiently manages periodic and aperiodic tasks. Since existing sensor node platforms in the literature focus on minimizing memory usage and power consumption, they are not capable of managing tasks that require real-time execution and fast average response times. We first analyze how to structure the decomposition of periodic and aperiodic tasks in the TinyOS-based sensor node platform so as to guarantee the deadlines of all periodic tasks while providing aperiodic tasks with good average response times. We then demonstrate the application and efficiency of the proposed real-time sensor node platform on a sensor node equipped with a low-power 8-bit microcontroller, an IEEE 802.15.4-compliant 2.4 GHz RF transceiver, and several sensors. Extensive experiments show that our sensor node platform performs efficiently in terms of three significant objective measures: the deadline miss ratio of periodic tasks, the average response time of aperiodic tasks, and the processor utilization of periodic and aperiodic tasks.
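
As a generic and deliberately simplified illustration of the scheduling concern addressed in this entry, the sketch below checks the classical rate-monotonic utilization bound for a periodic task set and reports the processor share left over for aperiodic requests; it is not the paper's TinyOS-based scheduler, and the task parameters are invented.

```java
// Simplified illustration: rate-monotonic schedulability check for periodic tasks
// (Liu & Layland bound) and the leftover utilization available for aperiodic work.
public class PeriodicTaskCheck {

    static class Task {
        final double periodMs, wcetMs;   // period and worst-case execution time
        Task(double periodMs, double wcetMs) { this.periodMs = periodMs; this.wcetMs = wcetMs; }
    }

    public static void main(String[] args) {
        Task[] periodic = {
            new Task(100.0, 10.0),   // e.g., sample a sensor every 100 ms
            new Task(250.0, 30.0),   // e.g., aggregate and filter readings
            new Task(1000.0, 80.0)   // e.g., send a radio packet every second
        };

        double utilization = 0.0;
        for (Task t : periodic) utilization += t.wcetMs / t.periodMs;

        int n = periodic.length;
        double bound = n * (Math.pow(2.0, 1.0 / n) - 1.0);   // Liu & Layland bound

        System.out.printf("Periodic utilization: %.3f (RM bound %.3f)%n", utilization, bound);
        if (utilization <= bound) {
            // A sufficient (not necessary) condition: all periodic deadlines are met,
            // and roughly (1 - utilization) of the CPU remains for aperiodic tasks.
            System.out.printf("Schedulable; ~%.0f%% of the CPU is left for aperiodic tasks%n",
                              (1.0 - utilization) * 100.0);
        } else {
            System.out.println("Bound exceeded; a deadline miss cannot be ruled out by this test");
        }
    }
}
```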

Distributed Assumption-Based Truth Maintenance System for Scalable Reasoning (대용량 추론을 위한 분산환경에서의 가정기반진리관리시스템)

  • Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.43 no.10
    • /
    • pp.1115-1123
    • /
    • 2016
  • An assumption-based truth maintenance system (ATMS) is a tool that maintains the reasoning process of an inference engine. It also supports non-monotonic reasoning based on dependency-directed backtracking. By bookkeeping all of the reasoning processes, it can quickly check and retract beliefs and efficiently provide solutions for problems with a large search space. However, the amount of data has grown exponentially in recent years, making it impossible to solve large-scale problems on a single machine, and the maintenance process for such problems can incur high computation cost due to large memory overhead. To overcome this drawback, this paper presents an approach to incrementally maintaining the reasoning process of an inference engine on a cluster using Spark. It maintains data dependencies such as assumptions, labels, environments, and justifications on a cluster of machines in parallel, and efficiently updates changes in a large amount of inferred data. We deployed the proposed ATMS on a cluster of 5 machines, conducted OWL/RDFS reasoning over the LUBM (Lehigh University Benchmark) data, and evaluated our system in terms of its performance and its functionalities, such as assertion, explanation, and retraction. In our experiments, the proposed system performed these operations in a reasonably short period of time on the over-80 GB inferred LUBM2000 dataset.
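
A very reduced sketch of the parallel bookkeeping idea, assuming Spark's Java API (JavaSparkContext, parallelizePairs, groupByKey, filter): justifications are kept as distributed (node, environment) pairs, a simplified "label" is obtained by grouping environments per node, and retracting an assumption becomes a parallel filter. The data and the label computation are simplified stand-ins for the full ATMS algorithm described in the paper.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Reduced sketch: each record pairs a derived node with one environment (a set of
// assumptions that supports it). Subsumption and inconsistency handling of a real
// ATMS label update are omitted for brevity.
public class DistributedAtmsSketch {

    static Set<String> envOf(String... assumptions) {
        return new HashSet<>(Arrays.asList(assumptions));
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("atms-sketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {

            JavaPairRDD<String, Set<String>> support = sc.parallelizePairs(Arrays.asList(
                    new Tuple2<>("GraduateStudent(x)", envOf("A1")),
                    new Tuple2<>("GraduateStudent(x)", envOf("A2", "A3")),
                    new Tuple2<>("Person(x)",          envOf("A1"))));

            // Simplified label computation: collect the environments of each node.
            support.groupByKey().collect().forEach(t ->
                    System.out.println(t._1() + " label: " + t._2()));

            // Retraction of assumption A3: drop every environment that depends on it.
            JavaPairRDD<String, Set<String>> afterRetraction =
                    support.filter(t -> !t._2().contains("A3"));
            System.out.println("Justifications left after retracting A3: "
                    + afterRetraction.count());
        }
    }
}
```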

Building a Log Framework for Personalization Based on a Java Open Source (JAVA 오픈소스 기반의 개인화를 지원하는 Log Framework 구축)

  • Sin, Choongsub;Park, Seog
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.8
    • /
    • pp.524-530
    • /
    • 2015
  • A log is used to monitor a system in text form and to detect its issues during the development and operation of a program; based on the log, system developers and operators can trace the cause of an issue. In the development phase, it is relatively simple to trace a log while only a small number of people, such as developers and testers, use the system. However, it is difficult to trace a log when many people use the system in the operation phase, and in most cases tracing is simply abandoned because the log cannot be tracked. This study proposes simplified log tracing during system operation: a log is created at run time based on an ID/IP, using features provided by Logback. The ID/IP of the user to be traced is saved in a DB and loaded into memory when the WAS starts. Before the online service executes, an interceptor decides whether to produce a separate log, and the requests of the tracked user are then written to a separate log file. Although every service request must pass through the interceptor, the overhead is insignificant since only in-JVM operations are involved.
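
Per-user log separation of the kind described in this entry is commonly realized with Logback's MDC together with a SiftingAppender keyed on the MDC value; the sketch below assumes a Spring MVC HandlerInterceptor, a hypothetical in-memory set of tracked IDs/IPs loaded from the DB at WAS start-up, and a hypothetical "traceUser" MDC key, rather than the paper's exact implementation.

```java
import org.slf4j.MDC;
import org.springframework.web.servlet.HandlerInterceptor;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Set;

// Sketch: before each request is served, decide whether its user is being traced and,
// if so, tag the thread with an MDC key. A Logback SiftingAppender discriminating on
// "traceUser" can then route that user's log events into a separate per-user file.
public class TraceLogInterceptor implements HandlerInterceptor {

    // Hypothetical in-memory store of tracked IDs/IPs, loaded from the DB at startup.
    private final Set<String> trackedUsers;

    public TraceLogInterceptor(Set<String> trackedUsers) {
        this.trackedUsers = trackedUsers;
    }

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                             Object handler) {
        String userId = request.getRemoteUser() != null
                ? request.getRemoteUser()
                : request.getRemoteAddr();          // fall back to the client IP
        if (trackedUsers.contains(userId)) {
            MDC.put("traceUser", userId);           // picked up by the SiftingAppender
        }
        return true;                                // always continue to the service
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                Object handler, Exception ex) {
        MDC.remove("traceUser");                    // avoid leaking the key across requests
    }
}
```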