• Title/Summary/Keyword: weighted space

WWCLOCK: Page Replacement Algorithm Considering Asymmetric I/O Cost of Flash Memory (WWCLOCK: 플래시 메모리의 비대칭적 입출력 비용을 고려한 페이지 교체 알고리즘)

  • Park, Jun-Seok;Lee, Eun-Ji;Seo, Hyun-Min;Koh, Kern
    • Journal of KIISE: Computing Practices and Letters / v.15 no.12 / pp.913-917 / 2009
  • Flash memories have asymmetric I/O costs for reads and writes in terms of latency and energy consumption, and the ratio of these costs depends on the type of storage. Moreover, it is becoming common for a system to use two flash memories, one as internal storage and one as an external memory card. For these reasons, buffer cache replacement algorithms should consider the I/O costs of each device as well as the likelihood of future reference. This paper presents the WWCLOCK (Write-Weighted CLOCK) algorithm, which uses the I/O costs of devices directly, along with the recency and frequency of cache blocks, to select a victim to evict from the buffer cache. WWCLOCK can be used for a wide range of storage devices with different I/O costs and for systems that use two or more memory devices at the same time. In addition, its time and space complexity is low, comparable to that of the CLOCK algorithm. Trace-driven simulations show that the proposed algorithm reduces total I/O time by 36.2% on average compared with LRU.
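The abstract describes a CLOCK-style policy that folds asymmetric device I/O costs into victim selection. Below is a minimal sketch of such a write-weighted CLOCK pass; the page attributes and the cost model are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a write-weighted CLOCK eviction pass (assumed cost model).

class Page:
    def __init__(self, block_id, read_cost, write_cost, dirty=False):
        self.block_id = block_id
        self.read_cost = read_cost      # device-specific read latency/energy
        self.write_cost = write_cost    # device-specific write latency/energy
        self.dirty = dirty              # a dirty page incurs a flash write on eviction
        self.referenced = True          # CLOCK reference bit

    def eviction_cost(self):
        # Re-fetching the block later costs a read; flushing a dirty block costs a write.
        return self.read_cost + (self.write_cost if self.dirty else 0.0)

def select_victim(pages, hand):
    """Clear reference bits of recently used pages (second chance), then evict
    the unreferenced page whose estimated eviction cost is lowest."""
    n = len(pages)
    for _ in range(2):                  # at most two sweeps of the ring
        candidates = []
        for i in range(n):
            idx = (hand + i) % n
            page = pages[idx]
            if page.referenced:
                page.referenced = False # second chance, as in plain CLOCK
            else:
                candidates.append((page.eviction_cost(), idx))
        if candidates:
            cost, victim_idx = min(candidates)
            return victim_idx, (victim_idx + 1) % n
    raise RuntimeError("no evictable page found")

# Example: a clean page is preferred over a dirty page whose flush would pay
# the expensive asymmetric write cost.
pages = [Page(0, 1.0, 8.0, dirty=True), Page(1, 1.0, 8.0), Page(2, 3.0, 20.0)]
for p in pages:
    p.referenced = False
print(select_victim(pages, hand=0))     # evicts page 1 (lowest eviction cost)
```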

Structural Design of FCM-based Fuzzy Inference System : A Comparative Study of WLSE and LSE (FCM기반 퍼지추론 시스템의 구조 설계: WLSE 및 LSE의 비교 연구)

  • Park, Wook-Dong;Oh, Sung-Kwun;Kim, Hyun-Ki
    • The Transactions of The Korean Institute of Electrical Engineers / v.59 no.5 / pp.981-989 / 2010
  • In this study, we introduce a new architecture of fuzzy inference system. In the fuzzy inference system, the Fuzzy C-Means clustering algorithm is used to form the premise part of the rules. The membership functions in the premise part of the fuzzy rules do not assume any explicit functional form; for any input, the resulting activation levels of such radial-basis functions depend directly on the distance between data points determined by the Fuzzy C-Means clustering. As the consequent part of the fuzzy rules (the local model representing the input-output relation in the corresponding sub-space), four types of polynomial are considered, namely constant, linear, quadratic, and modified quadratic. This offers a significant level of design flexibility, as each rule can come with a different type of local model in its consequent. Learning based on either the least squares estimator (LSE) or the weighted least squares estimator (WLSE) is used to estimate the coefficients of the consequent polynomial of the fuzzy rules. In fuzzy modeling, complexity and interpretability (or simplicity), as well as accuracy of the obtained model, are essential design criteria. The performance of the fuzzy inference system is directly affected by parameters such as the fuzzification coefficient used in FCM, the number of rules (clusters), and the order of the polynomial in the consequent part of the rules; accordingly, a preferred model structure can be obtained by adjusting these parameters. Moreover, a comparative experimental study of WLSE and LSE is carried out while varying the number of clusters (rules) and the polynomial type. The superiority of the proposed model is demonstrated using the Automobile Miles per Gallon (MPG) dataset, the Boston Housing machine-learning dataset, and the Mackey-Glass time series dataset.
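The abstract contrasts LSE and WLSE learning of the consequent coefficients. The sketch below shows the two estimators in NumPy; the synthetic data and the use of membership grades as the diagonal weight matrix are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

# Hedged sketch: LSE vs. WLSE fit of a linear consequent polynomial for one rule.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(50, 2))          # 50 samples, 2 inputs (synthetic)
y = 1.5 + 2.0 * x[:, 0] - 0.5 * x[:, 1] + 0.1 * rng.standard_normal(50)

X = np.column_stack([np.ones(len(x)), x])     # design matrix [1, x1, x2]
u = rng.uniform(0.0, 1.0, size=len(x))        # stand-in for FCM membership grades
W = np.diag(u)                                # weight matrix of one rule

# Ordinary least squares: a = (X^T X)^{-1} X^T y
a_lse = np.linalg.solve(X.T @ X, X.T @ y)

# Weighted least squares: a = (X^T W X)^{-1} X^T W y
# (each rule's local model is fitted with emphasis on the data near its cluster)
a_wlse = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("LSE  coefficients:", np.round(a_lse, 3))
print("WLSE coefficients:", np.round(a_wlse, 3))
```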

Development of an Index Model on the Information and Communication Ethics (정보통신윤리지수 모델 개발)

  • Lee, Jae-Woon;Han, Keun-Woo;Lee, YoungJun;Kim, Seong-Sik
    • The Journal of Korean Association of Computer Education / v.10 no.3 / pp.19-29 / 2007
  • Informatization policies that concentrated only on national economic growth created an imbalance between technology and social culture and produced serious adverse side effects, problems that threaten personal identity in cyberspace. In this study, we developed an index model of information and communication ethics with which the national level of information and communication ethics can be assessed. For this purpose, based on a variety of previous research data, the areas of ethical indicators in information and communication were developed through brainstorming by the research staff. Through consultation with experts, the feasibility and representativeness of the indicators were improved, and the weight of each indicator was determined by the AHP method. The results of questionnaires from information users and information providers (major portals) were also reflected, using a simple average, in the calculation of the weight of each index element.
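As a rough illustration of the weighting step described above, the sketch below derives indicator weights from an AHP pairwise comparison matrix and combines simply averaged survey scores into a composite index. The comparison matrix and scores are invented for illustration, not the study's data.

```python
import numpy as np

# Pairwise comparisons among three hypothetical indicator areas (assumed values).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Indicator scores averaged over user and provider questionnaires (simple average).
user_scores = np.array([72.0, 65.0, 80.0])
provider_scores = np.array([68.0, 70.0, 75.0])
scores = (user_scores + provider_scores) / 2.0

index = float(w @ scores)                     # weighted composite ethics index
print("weights:", np.round(w, 3), "index:", round(index, 2))
```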


A Study on the Fabrication and the Impedance Matching of SPUDT Type SAW Filter (단상 단방향 형태의 표면탄성파 필터 제작 및 임피던스 정합)

  • You Il-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.3 / pp.602-608 / 2005
  • We studied a SAW filter based on a Single Phase Unidirectional Transducer (SPUDT), formed on a langasite substrate with an evaporated aluminum-copper alloy, and first examined the design by computer simulation. From the simulation results, we fabricated the filter with a block-weighted IDT as the input transducer and a withdrawal-weighted IDT as the output transducer. We also determined the proper conditions for impedance matching of the SPUDT SAW filter. The input and output IDTs each have 50 finger pairs; the thickness and width of the reflector are 5000 Å and 3.6 μm, respectively; the width of the IDT fingers is 2.4 μm; and the space between an IDT finger and the reflector is 2.0 μm. After impedance matching, the frequency response of the fabricated SAW filter shows a center frequency of about 190 MHz and a 3 dB bandwidth of about 7.7 MHz. In addition, the ripple is less than 0.4 dB and the standing wave ratio is about 1.5 after impedance matching.
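For readers unfamiliar with the standing wave ratio quoted above, the short sketch below shows how SWR follows from an impedance mismatch. The 50-ohm reference and the example load impedance are assumptions, not the fabricated filter's measured values.

```python
import numpy as np

Z0 = 50.0                       # assumed reference (source/line) impedance, ohms
ZL = 62.0 + 8.0j                # hypothetical filter input impedance after matching

gamma = (ZL - Z0) / (ZL + Z0)   # complex reflection coefficient
vswr = (1 + abs(gamma)) / (1 - abs(gamma))          # standing wave ratio
return_loss_db = -20.0 * np.log10(abs(gamma))       # corresponding return loss

print(round(vswr, 2), round(return_loss_db, 1))
```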

Comparison of Composite Methods of Satellite Chlorophyll-a Concentration Data in the East Sea

  • Park, Kyung-Ae;Park, Ji-Eun;Lee, Min-Sun;Kang, Chang-Keun
    • Korean Journal of Remote Sensing / v.28 no.6 / pp.635-651 / 2012
  • To produce a level-3 monthly composite image from the daily level-2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) chlorophyll-a concentration data set in the East Sea, we applied four averaging methods: the simple average, the geometric mean, the maximum likelihood average, and the weighted average. Prior to applying each averaging method, we classified all pixels into normal pixels and abnormal speckles with anomalously high chlorophyll-a concentrations, so that speckles were eliminated before compositing. As a result, none of the composite maps contained erratic speckle effects. The geometric mean consistently underestimated chlorophyll-a concentrations compared with the other methods. The weighted average was quite similar to the simple average, but it tended to overestimate in the high range of chlorophyll-a concentration. The maximum likelihood method was nearly identical to the simple average, with small variance of the differences and high correlation (r=0.9962) between the two; however, it had the disadvantage of being very sensitive to speckles within a bin. The geometric mean deviated most from the remaining methods regardless of the magnitude of the chlorophyll-a concentration; its bias grew as the standard deviation within a bin increased, that is, as the data became less uniform. All the methods exhibited large errors where chlorophyll-a concentrations scattered strongly in time and space. This study emphasizes the importance of the speckle removal process and of the proper selection of averaging methods to reduce composite errors for diverse scientific applications of satellite-derived chlorophyll-a concentration data.
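A minimal sketch contrasting three of the composite methods discussed above for a single bin of daily values is shown below. The speckle threshold and the inverse-variance weights are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

daily_chl = np.array([0.42, 0.45, 0.40, 0.43, 6.80])   # mg/m^3; last value is a speckle
uncertainty = np.array([0.05, 0.04, 0.06, 0.05, 0.05]) # assumed per-observation uncertainty

# Speckle removal: discard values far above the bin median (assumed criterion).
median = np.median(daily_chl)
keep = daily_chl < 3.0 * median
chl, sigma = daily_chl[keep], uncertainty[keep]

simple_mean = chl.mean()
geometric_mean = np.exp(np.log(chl).mean())            # sits at or below the simple mean
weights = 1.0 / sigma**2                               # inverse-variance weighting
weighted_mean = np.sum(weights * chl) / np.sum(weights)

print(f"simple {simple_mean:.3f}, geometric {geometric_mean:.3f}, weighted {weighted_mean:.3f}")
```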

The Finite Element Formulation and Its Classification of Dynamic Thermoelastic Problems of Solids (구조동역학-열탄성학 연성문제의 유한요소 정식화 및 분류)

  • Yun, Seong-Ho
    • Journal of the Computational Structural Engineering Institute of Korea / v.13 no.1 / pp.37-49 / 2000
  • This paper presents a first, foundational study on the development of unified finite element formulations for problems involving the dynamic/thermoelastic behavior of solids. In the first part of the formulation, the finite element method is based on the introduction of a new quantity, defined as the heat displacement, which allows the heat conduction equations to be written in a form equivalent to the equation of motion and the equations of coupled thermoelasticity to be written in a unified form. The equations obtained are used to express a variational formulation which, together with the concept of generalized coordinates, yields a set of differential equations with time as the independent variable. Using the Laplace transform, the resulting finite element equations are described in the transform domain. In the second part, the Laplace transform is applied to both the equation of heat conduction derived in the first part and the equations of motion with their corresponding boundary conditions, which are referred to as the transformed equations. Selecting interpolation functions that depend only on the space variables and applying the weighted residual method to the coupled equations yield the necessary finite element matrices in the transformed domain. Finally, to verify the validity of the two approaches, the resulting finite element equations are compared term by term.
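To make the transformed-domain, weighted-residual step more concrete, here is a hedged sketch of a Galerkin discretization of the Laplace-transformed 1D heat conduction equation s*T^ - alpha*T^'' = T0, assembled as (s*C + K) T^ = F. One-dimensional linear elements, an uncoupled heat equation, and the boundary data are simplifying assumptions, not the paper's full coupled model.

```python
import numpy as np

n_el, L, alpha, s, T0 = 8, 1.0, 1.0, 2.0, 1.0
h = L / n_el
n_nodes = n_el + 1

C = np.zeros((n_nodes, n_nodes))      # "capacity" matrix from the s*T^ term
K = np.zeros((n_nodes, n_nodes))      # conduction matrix
F = np.zeros(n_nodes)                 # load from the initial temperature T0

# Element matrices for linear shape functions on an element of length h.
Ce = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
Ke = (alpha / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
Fe = (h / 2.0) * np.array([T0, T0])

for e in range(n_el):                 # standard assembly over elements
    dofs = [e, e + 1]
    C[np.ix_(dofs, dofs)] += Ce
    K[np.ix_(dofs, dofs)] += Ke
    F[dofs] += Fe

A = s * C + K                         # transformed-domain system matrix
# Dirichlet ends: transformed temperature fixed to zero at both boundaries.
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0
F[0] = F[-1] = 0.0

T_hat = np.linalg.solve(A, F)         # nodal values of the transformed temperature
print(np.round(T_hat, 4))
```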


A Comparative Analysis of Areal Interpolation Methods for Representing Spatial Distribution of Population Subgroups (하위인구집단의 분포 재현을 위한 에어리얼 인터폴레이션의 비교 분석)

  • Cho, Daeheon
    • Spatial Information Research / v.22 no.3 / pp.35-46 / 2014
  • Population data are usually provided at administrative spatial units in Korea, so areal interpolation is needed for fine-grained analysis. This study compares various areal interpolation methods for population subgroups rather than the total population. Using 2010 census data for Seoul, we estimated the number of elderly people and single-person households for small areal units from Dong data with the different interpolation methods and compared the estimates with actual values. As a result, the performance of the areal interpolation methods varied between the total population and population subgroups, as well as among the different subgroups. The method using GWR (geographically weighted regression) and building-type data outperformed the other methods for the total population and total households. However, the OLS regression method using building-type data performed better for the elderly population, and the OLS regression method based on land-use data was the most effective for single-person households. Based on these results, the spatial distribution of the elderly living alone was represented at small areal units; we believe that this approach can contribute to the effective implementation of urban policies.
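As background for the comparison above, the sketch below shows simple areal weighting, the baseline family of interpolation methods against which the regression and GWR variants are assessed. Zone geometries are reduced to intersection areas, and the figures are invented for illustration.

```python
# Source zones (e.g., Dong units) with known subgroup counts (assumed values).
source_population = {"dong_A": 1200, "dong_B": 800}
source_area = {"dong_A": 4.0, "dong_B": 2.0}          # km^2

# Intersection areas between each source zone and one small target zone.
intersection_area = {"dong_A": 1.0, "dong_B": 0.5}    # km^2

# Each source contributes in proportion to the share of its area that overlaps
# the target zone (homogeneous-density assumption of areal weighting).
estimate = sum(
    source_population[z] * intersection_area[z] / source_area[z]
    for z in source_population
)
print(estimate)   # 1200*(1/4) + 800*(0.5/2) = 500 people
```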

High-Efficiency Design of a Ventilation Axial-Flow Fan by Using Weighted Average Surrogate Models (가중평균대리모델을 이용한 환기용 축류송풍기의 고효율 최적설계)

  • Kim, Jae-Woo;Kim, Jin-Hyuk;Lee, Chan;Kim, Kwang-Yong
    • Transactions of the Korean Society of Mechanical Engineers B / v.35 no.8 / pp.763-771 / 2011
  • An optimization procedure for the design of a ventilation axial-flow fan is presented in this paper. Flow analyses of the preliminary fan are performed by solving the three-dimensional Reynolds-averaged Navier-Stokes equations with a finite-volume solver, using the shear-stress transport turbulence model as the turbulence closure. Three variables, the hub-to-tip ratio and the stagger angles at the mid and tip spans, are selected for the optimization. The Latin hypercube sampling method is used as a design-of-experiments technique to generate twenty-five design points within the design space, and the weighted average surrogate models WTA1, WTA2, and WTA3 are applied to find optimal designs. The results show that the efficiency is considerably enhanced.
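A hedged sketch of the weighted-average surrogate idea follows: individual surrogate predictions are blended with weights that shrink as a model's cross-validation error grows. The specific weighting rules of WTA1/WTA2/WTA3 differ in detail; the inverse-error rule and the numbers below are illustrative assumptions.

```python
import numpy as np

# Cross-validation RMSE of three hypothetical component surrogates.
cv_rmse = np.array([0.12, 0.08, 0.20])

# Predicted fan efficiency at one candidate design from each surrogate (assumed).
predictions = np.array([0.842, 0.851, 0.836])

weights = 1.0 / cv_rmse                      # more accurate surrogates get larger weights
weights /= weights.sum()                     # normalize so the weights sum to one

blended = float(weights @ predictions)       # weighted-average surrogate prediction
print(np.round(weights, 3), round(blended, 4))
```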

A Study of Temporary Positioning Scheme with IoT devices for Disastrous Situations in Indoor Spaces Without Permanent Network Infrastructure (상설 네트워크 인프라가 없는 실내 공간에서 재난시 IoT 기기를 활용한 부착형 실내 위치 추적 기술 연구)

  • Lee, Jeongpyo;Yun, Younguk;Kim, Sangsoo;Kim, Youngok
    • Journal of the Society of Disaster Information / v.14 no.3 / pp.315-324 / 2018
  • Purpose: This paper proposes a temporary indoor positioning scheme using internet of things (IoT) devices for disaster situations in places without network infrastructure. Method: The proposed scheme is based on the weighted centroid localization scheme, which can estimate the position of a target with simple computation. Results: The scheme was implemented with IoT devices in the underground parking lot of a general office building, where no network is installed. According to the experimental results, the positioning error was around 10 m without an a priori calibration process in the 82.5 m × 56.4 m underground space. Conclusion: The proposed scheme can be deployed in many places without network infrastructure, such as parking lots, warehouses, and factories.
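The sketch below illustrates the weighted centroid localization idea that the proposed scheme builds on: the target position is estimated as the weight-averaged position of the anchors. The anchor coordinates, RSSI values, and the RSSI-to-weight rule are illustrative assumptions.

```python
import numpy as np

anchors = np.array([[0.0, 0.0],     # known positions of IoT anchor nodes (m)
                    [82.5, 0.0],
                    [0.0, 56.4],
                    [82.5, 56.4]])
rssi_dbm = np.array([-52.0, -71.0, -64.0, -80.0])   # assumed received signal strengths

# Stronger (less negative) RSSI -> larger weight; here linear received power is used.
weights = 10.0 ** (rssi_dbm / 10.0)

# Weighted centroid: sum(w_i * p_i) / sum(w_i)
estimate = (weights[:, None] * anchors).sum(axis=0) / weights.sum()
print(np.round(estimate, 2))        # estimated target position in metres
```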

CHARMS: A Mapping Heuristic to Explore an Optimal Partitioning in HW/SW Co-Design (CHARMS: 하드웨어-소프트웨어 통합설계의 최적 분할 탐색을 위한 매핑 휴리스틱)

  • Adeluyi, Olufemi;Lee, Jeong-A
    • Journal of the Korea Society of Computer and Information / v.15 no.9 / pp.1-8 / 2010
  • The key challenge in HW/SW co-design is how to choose an appropriate HW/SW partitioning from the vast array of possible options in the mapping set. In this paper we present a unique and efficient approach to this problem, the Customized Heuristic Algorithm for Reducing Mapping Sets (CHARMS). CHARMS uses sensitivity to the computational complexity of individual tasks, as well as computed weighted values of metrics that influence system performance, to streamline the mapping sets and extract the most promising cases. Using an H.263 encoder, we show that CHARMS sieves out 95.17% of the sub-optimal mapping sets, leaving the designer with the best 4.83% of cases to select from for run-time implementation.
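To give a flavor of pruning a mapping set with weighted metric scores, here is a hedged sketch that ranks candidate HW/SW mappings by a weighted sum of normalized metrics and keeps only the top few. The metric names, weights, and the simple scoring rule are illustrative; CHARMS's actual heuristic also exploits per-task computational-complexity sensitivity.

```python
candidate_mappings = {
    "all_sw":        {"exec_time": 9.4, "area": 0.0, "power": 1.2},
    "dct_in_hw":     {"exec_time": 5.1, "area": 2.3, "power": 1.6},
    "dct_me_in_hw":  {"exec_time": 3.2, "area": 4.8, "power": 2.1},
    "all_hw":        {"exec_time": 2.9, "area": 9.5, "power": 3.4},
}
weights = {"exec_time": 0.6, "area": 0.25, "power": 0.15}   # lower metric = better

def score(name):
    """Min-max normalize each metric across candidates, then take the weighted sum."""
    total = 0.0
    for metric, w in weights.items():
        values = [m[metric] for m in candidate_mappings.values()]
        lo, hi = min(values), max(values)
        norm = (candidate_mappings[name][metric] - lo) / (hi - lo) if hi > lo else 0.0
        total += w * norm
    return total

ranked = sorted(candidate_mappings, key=score)
keep = max(1, round(0.25 * len(ranked)))     # retain only the best ~25% of mappings
print(ranked[:keep])                         # -> ['dct_me_in_hw'] for these numbers
```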