• Title/Summary/Keyword: Associative memories


FAM APPROACH TO DESIGN A FUZZY CONTROLLER

  • Lo Presti, M.;Poluzzi, R.;Rizzotto, G.G.;Zanaboni, A.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1033-1036
    • /
    • 1993
  • Most fuzzy logic control applications realized to date have been designed using various heuristic approaches for synthesis and implemented in conventional programming languages on general-purpose microcontrollers. This paper presents a new methodology for designing a fuzzy controller. The methodology is based on the cell-to-cell approach to extract the control law. A set of fuzzy rules is then found using a FAM (fuzzy associative memory) approach. The proposed procedure was implemented to control the rotor position of a DC motor.

  • PDF
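The FAM idea in the abstract above — a rule matrix indexed by fuzzified inputs whose fired entries are combined into a crisp control action — can be sketched as follows. The triangular membership functions, the 3x3 rule matrix, and the label names are illustrative assumptions, not the paper's actual controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets for the normalized error and error change: Negative, Zero, Positive.
SETS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}

# FAM matrix: (error label, delta-error label) -> crisp control action.
FAM = {
    ("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
    ("Z", "N"): -0.5, ("Z", "Z"): 0.0,  ("Z", "P"): 0.5,
    ("P", "N"): 0.0,  ("P", "Z"): 0.5,  ("P", "P"): 1.0,
}

def fam_control(error, d_error):
    """Fire all rules (min for AND) and defuzzify by weighted average."""
    num = den = 0.0
    for (e_lab, de_lab), action in FAM.items():
        w = min(tri(error, *SETS[e_lab]), tri(d_error, *SETS[de_lab]))
        num += w * action
        den += w
    return num / den if den else 0.0
```

Because the rule matrix is just an associative lookup from fuzzified antecedents to consequents, adding or tuning rules only changes table entries, not control code.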

Reducing the Overhead of Virtual Address Translation Process (가상주소 변환 과정에 대한 부담의 줄임)

  • U, Jong-Jeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.1
    • /
    • pp.118-126
    • /
    • 1996
  • The memory hierarchy is a useful mechanism for improving memory access speed and enlarging the program space by layering memories and separating program spaces from memory spaces. However, it requires at least two memory accesses for each data reference: a TLB (Translation Lookaside Buffer) access for the address translation and a data cache access for the desired data. If the cache size grows beyond the product of the page size and the cache associativity, it becomes difficult to access the TLB and the cache in parallel, lengthening the critical timing path in the processor. To achieve such parallel accesses, we present a hybrid mapped TLB that combines a direct-mapped TLB with a very small fully-associative TLB. The former reduces the TLB access time, while the latter removes the former's conflict misses. Trace-driven simulation shows that under the given workloads the proposed TLB is effective even when the added fully-associative TLB has only four entries, because the effect of its increased misses is offset by its speed benefits.

  • PDF
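The hybrid TLB described in the abstract above can be sketched in a few lines: a direct-mapped TLB probed alongside a tiny fully-associative TLB that absorbs its conflict misses. The entry counts and the FIFO replacement in the associative part are assumptions for illustration, not details from the paper.

```python
from collections import OrderedDict

class HybridTLB:
    def __init__(self, direct_entries=64, assoc_entries=4):
        self.direct = [None] * direct_entries   # one (vpn, pfn) pair per slot
        self.size = direct_entries
        self.assoc = OrderedDict()              # vpn -> pfn, FIFO-ordered
        self.assoc_entries = assoc_entries

    def lookup(self, vpn):
        """Probe both structures 'in parallel'; return pfn, or None on a miss."""
        slot = self.direct[vpn % self.size]
        if slot is not None and slot[0] == vpn:
            return slot[1]
        return self.assoc.get(vpn)

    def insert(self, vpn, pfn):
        """On a miss, fill the direct slot; its victim moves to the assoc TLB."""
        idx = vpn % self.size
        victim = self.direct[idx]
        self.direct[idx] = (vpn, pfn)
        if victim is not None and victim[0] != vpn:
            if len(self.assoc) >= self.assoc_entries:
                self.assoc.popitem(last=False)  # evict the oldest entry
            self.assoc[victim[0]] = victim[1]
```

Two virtual pages that collide in the direct-mapped part no longer evict each other immediately, which is exactly the conflict-miss behavior the small associative part is meant to remove.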

A Study on the Effect of Corporate Association of the Hypermarket on Relationship Quality and Customer Loyalty

  • Youn-Chul JANG;Min-Jung KANG
    • Journal of Distribution Science
    • /
    • v.22 no.2
    • /
    • pp.115-123
    • /
    • 2024
  • Purpose: Using the association concept as a basis, businesses offer association cues, such as trademarks and logos, to support consumers' associative memories. These stimuli can be connected to anything, including a product's unique personality, the advantages it offers, or the company that made it. The purpose of this study is to understand how hypermarkets' corporate association, relationship commitment, and trust affect consumers' attitudes and behaviors. Data, methodology, and research design: Regression analysis was used in this study to confirm the relationship between the independent and dependent variables, as well as to forecast how changes in the independent variable would affect the dependent variable. Results: These are the findings of the research. First, trust and relationship commitment were significantly impacted by the hypermarket's product associations, corporate-management-related associations, and social responsibility associations. Second, both behavioral and attitudinal loyalty were impacted by the level of trust in hypermarkets. Third, both behavioral and attitudinal loyalty were impacted by a hypermarket's relationship commitment. Conclusions: Corporate associations with the hypermarket play an important role in shaping and maintaining consumers' awareness of the company or brand. Since this is affected by various factors, such as the quality of products and services and corporate social activities, companies need to positively induce awareness of their products or services.

Performance Analysis of Flash Translation Layer Algorithms for Windows-based Flash Memory Storage Device (윈도우즈 기반 플래시 메모리의 플래시 변환 계층 알고리즘 성능 분석)

  • Park, Won-Joo;Park, Sung-Hwan;Park, Sang-Won
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.4
    • /
    • pp.213-225
    • /
    • 2007
  • Flash memory is widely used as a storage device for portable equipment such as digital cameras, MP3 players, and cellular phones because of its large capacity, nonvolatility, low power consumption, and good performance. However, a block in flash memory must be erased before it can be rewritten, a hardware characteristic known as the erase-before-write architecture. The erase operation is much slower than read or write operations. An FTL (Flash Translation Layer) is used to overcome this problem. We compared the performance of existing FTL algorithms on a Windows-based OS. We developed a tool called FTL APAT to gather disk I/O patterns and analyze the performance of the FTL algorithms. The log buffer scheme with fully associative sector translation (FAST) showed the best performance.
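The log-buffer idea behind the FAST scheme named above can be sketched as follows: updates are appended out-of-place to a shared log block whose pages are fully associatively mapped, and the log is merged back into the data blocks when it fills. The page count per block, the single log block, and the merge policy are simplifying assumptions, not the algorithm's full design.

```python
PAGES_PER_BLOCK = 4  # assumed block geometry for the sketch

class FastFTL:
    def __init__(self):
        self.data = {}       # logical page -> value, the in-place "data blocks"
        self.log_map = {}    # logical page -> value, fully associative log block
        self.log_used = 0    # physical log pages consumed (appends, not keys)

    def write(self, lpn, value):
        """Out-of-place update: append to the log instead of erasing in place."""
        if self.log_used == PAGES_PER_BLOCK:
            self._merge()    # log block full: fold updates back, then reuse it
        self.log_map[lpn] = value
        self.log_used += 1

    def read(self, lpn):
        """The log holds the freshest copy, so probe it before the data block."""
        return self.log_map.get(lpn, self.data.get(lpn))

    def _merge(self):
        """Apply all logged updates to the data blocks and recycle the log."""
        self.data.update(self.log_map)
        self.log_map.clear()
        self.log_used = 0
```

Because any logical page may land in any log page, repeated updates to hot sectors stay in the log and the expensive merge-and-erase step is deferred, which is the point of fully associative sector translation.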

Energy-Performance Efficient 2-Level Data Cache Architecture for Embedded System (내장형 시스템을 위한 에너지-성능 측면에서 효율적인 2-레벨 데이터 캐쉬 구조의 설계)

  • Lee, Jong-Min;Kim, Soon-Tae
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.5
    • /
    • pp.292-303
    • /
    • 2010
  • On-chip cache memories play an important role in terms of both performance and energy consumption in resource-constrained embedded systems by filtering many off-chip memory accesses. We propose a 2-level data cache architecture with a low energy-delay product tailored for embedded systems. The L1 data cache is small and direct-mapped, and employs a write-through policy. In contrast, the L2 data cache is set-associative and adopts a write-back policy. Consequently, the L1 data cache is accessed in one cycle and provides high cache bandwidth, while the L2 data cache is effective in reducing the global miss rate. To reduce the penalty of the high miss rate caused by the small L1 cache and the power consumption of address generation, we propose an ECP (Early Cache hit Predictor) scheme. The ECP predicts whether the L1 cache has the requested data using both fast address generation and L1 cache hit prediction. To reduce the high energy cost of accessing the L2 data cache caused by heavy write-through traffic from the write buffer placed between the two cache levels, we propose a one-way write scheme. In our simulation-based experiments using a cycle-accurate simulator and embedded benchmarks, the proposed 2-level data cache architecture shows average improvements of 3.6% in overall system performance and 50% in data cache energy consumption.
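The two-level lookup described in the abstract above can be sketched as a small direct-mapped L1 in front of a set-associative L2 with LRU replacement. The line counts, the 4-way L2, and the word-granularity addressing are illustrative assumptions; the write policies, ECP, and one-way write scheme are omitted for brevity.

```python
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, l1_lines=8, l2_sets=16, l2_ways=4):
        self.l1 = [None] * l1_lines   # direct-mapped: one tag per line
        self.l1_lines = l1_lines
        # each L2 set is an OrderedDict of tags; insertion order tracks LRU
        self.l2 = [OrderedDict() for _ in range(l2_sets)]
        self.l2_sets = l2_sets
        self.l2_ways = l2_ways

    def access(self, addr):
        """Return 'L1', 'L2', or 'MISS', filling the caches on the way back."""
        if self.l1[addr % self.l1_lines] == addr:
            return "L1"
        s = self.l2[addr % self.l2_sets]
        if addr in s:
            s.move_to_end(addr)                 # refresh LRU position
            self.l1[addr % self.l1_lines] = addr
            return "L2"
        if len(s) >= self.l2_ways:
            s.popitem(last=False)               # evict the LRU way
        s[addr] = True
        self.l1[addr % self.l1_lines] = addr
        return "MISS"
```

The sketch shows why the combination works: two addresses conflicting in the direct-mapped L1 ping-pong between its lines, but the set-associative L2 keeps both resident, so the conflict costs an L2 hit rather than an off-chip access.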