• Title/Summary/Keyword: Process Data


Variable latency L1 data cache architecture design in multi-core processor under process variation

  • Kong, Joonho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.9
    • /
    • pp.1-10
    • /
    • 2015
  • In this paper, we propose a new variable-latency L1 data cache architecture for multi-core processors. Our architecture extends the traditional variable-latency cache to multi-core processors by adding a specialized data structure that records the latency of the L1 data cache: the value stored in the structure is determined by the latency added to the L1 data cache. The structure also tracks the remaining L1 access cycles and notifies the reservation station in the core when the data arrives. As in the variable-latency cache of a single-core architecture, our architecture flexibly extends the cache access cycles to account for process variation. The proposed cache architecture reduces the yield losses incurred by L1 cache access-time failures to nearly 0%. Moreover, we quantitatively evaluate performance, power, energy consumption, power-delay product, and energy-delay product as the number of cache access cycles increases.
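
  • The latency-recording structure described above can be illustrated with a small model: a table of extra cycles per cache set, plus a per-load countdown that signals data arrival to the reservation station. This is a minimal sketch; the class name, the 2-cycle nominal hit latency, and the per-set granularity are illustrative assumptions, not the paper's actual design.

```python
NOMINAL_CYCLES = 2  # assumed baseline L1 hit latency (illustrative)

class VariableLatencyL1:
    """Sketch of a variable-latency L1: sets that fail timing at the
    nominal latency get extra cycles recorded in a small table."""

    def __init__(self, extra_cycles_per_set):
        # extra_cycles_per_set: set index -> added cycles (e.g. from post-silicon test)
        self.extra = extra_cycles_per_set
        self.in_flight = {}  # load id -> remaining cycles

    def issue_load(self, load_id, set_index):
        self.in_flight[load_id] = NOMINAL_CYCLES + self.extra.get(set_index, 0)

    def tick(self):
        """Advance one cycle; return loads whose data arrived this cycle,
        i.e. the notifications sent to the reservation station."""
        arrived = []
        for load_id in list(self.in_flight):
            self.in_flight[load_id] -= 1
            if self.in_flight[load_id] == 0:
                arrived.append(load_id)
                del self.in_flight[load_id]
        return arrived

cache = VariableLatencyL1({3: 1})      # set 3 needs one extra cycle
cache.issue_load("ld0", set_index=0)
cache.issue_load("ld1", set_index=3)
print(cache.tick())  # cycle 1: []
print(cache.tick())  # cycle 2: ['ld0'] arrives at nominal latency
print(cache.tick())  # cycle 3: ['ld1'] arrives one cycle late
```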

The wavelet based Kalman filter method for the estimation of time-series data (시계열 데이터의 추정을 위한 웨이블릿 칼만 필터 기법)

  • Hong, Chan-Young;Yoon, Tae-Sung;Park, Jin-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2003.11c
    • /
    • pp.449-451
    • /
    • 2003
  • The estimation of time-series data is a fundamental step in many data-analysis tasks. However, unwanted measurement error is usually added to the true data, so accurate estimation depends on an efficient method for eliminating the error components. The wavelet transform is expected to improve the accuracy of estimation because it can decompose and analyze the data at various resolutions. We therefore propose a wavelet-based Kalman filter method for the estimation of time-series data. The wavelet transform separates the data by frequency band, and the detail wavelet coefficients reflect the stochastic process of the error components; this property makes it possible to obtain the covariance of the measurement error. We then estimate the true data through a recursive Kalman filtering algorithm using the obtained covariance value. The procedure is verified with a fundamental example of a Brownian-walk process.
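
  • A minimal sketch of the idea on a Brownian-walk example: level-1 Haar detail coefficients isolate the high-frequency error component, a robust MAD estimate recovers the measurement-noise covariance, and a scalar Kalman filter then uses it. The signal model, noise levels, and MAD constant are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Brownian (random) walk observed with measurement noise.
true_sigma = 0.5
x = np.cumsum(rng.normal(0, 0.1, 256))       # true random walk
z = x + rng.normal(0, true_sigma, 256)       # noisy measurements

# Level-1 Haar detail coefficients capture the high-frequency error
# component; the robust MAD estimate recovers the noise std.
d = (z[0::2] - z[1::2]) / np.sqrt(2.0)
sigma_hat = np.median(np.abs(d)) / 0.6745
R = sigma_hat**2                             # measurement-error covariance

# Recursive scalar Kalman filter for a random-walk state model.
Q = 0.1**2                                   # assumed process-noise variance
xhat, P = z[0], R
est = []
for zk in z:
    P = P + Q                        # predict
    K = P / (P + R)                  # Kalman gain
    xhat = xhat + K * (zk - xhat)    # update with measurement
    P = (1 - K) * P
    est.append(xhat)

rmse_raw = np.sqrt(np.mean((z - x) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(est) - x) ** 2))
print(f"sigma_hat={sigma_hat:.2f}, raw RMSE={rmse_raw:.2f}, KF RMSE={rmse_kf:.2f}")
```

    On this synthetic walk the wavelet estimate lands close to the true noise level, and filtering with the recovered covariance reduces the error relative to the raw measurements.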


Synthesis of Human Body Shape for Given Body Sizes using 3D Body Scan Data (3차원 스캔 데이터를 이용하여 임의의 신체 치수에 대응하는 인체 형상 모델 생성 방법)

  • Jang, Tae-Ho;Baek, Seung-Yeob;Lee, Kun-Woo
    • Korean Journal of Computational Design and Engineering
    • /
    • v.14 no.6
    • /
    • pp.364-373
    • /
    • 2009
  • In this paper, we suggest a method for constructing a parameterized human body model with any required body sizes from 3D scan data. Thanks to advances in 3D scanning technology, detailed human body data are available that allow precise human models to be generated, and much research in this field is performed with 3D scan data. However, previous approaches have limitations: they require too much time for hole filling or for computing the model parameterization, and they often omit a verification step. To solve these problems, we first choose 125 suitable 3D scans from the 5th Korean body-size survey of Size Korea according to age, height, and weight. We then post-process the data (feature-point setting, RBF interpolation, and alignment) to parameterize the human model, and apply principal component analysis to the post-processed data to obtain the dominant shape parameters. These steps reduce processing time without loss of accuracy. Finally, we compare the results against the statistical data of Size Korea to verify the parameterized human model.
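
  • The principal-component step can be sketched on synthetic stand-in data: each row imitates one aligned, post-processed scan vector, and an SVD yields the dominant shape parameters plus a low-rank reconstruction whose error measures the loss of accuracy. The data dimensions and noise level are illustrative assumptions; only the subject count (125) comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 125 aligned body-shape vectors: each row mimics a
# flattened feature vector after hole filling and RBF alignment.
n_subjects, n_features = 125, 60
latent = rng.normal(size=(n_subjects, 3))        # underlying shape variation
basis = rng.normal(size=(3, n_features))
shapes = latent @ basis + rng.normal(0, 0.05, (n_subjects, n_features))

# Principal component analysis via SVD on mean-centered data.
mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
explained = (S**2) / np.sum(S**2)
k = 3                                            # dominant shape parameters
params = (shapes - mean) @ Vt[:k].T              # per-subject shape parameters

# Reconstruct each subject from k parameters and measure the accuracy loss.
recon = params @ Vt[:k] + mean
err = np.sqrt(np.mean((recon - shapes) ** 2))
print(f"top-{k} components explain {explained[:k].sum():.1%}, RMS error {err:.3f}")
```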

Simulation of the dihydrate process for the production of phosphoric acid

  • Yeo, Y.K.;Cho, Y.S.;Moon, B.K.;Kim, Y.H.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1988.10b
    • /
    • pp.875-878
    • /
    • 1988
  • In this work we show how the methods used in chemical engineering for the analysis and simulation of processes may be applied to an actual phosphoric acid plant. Attention is focused on the dihydrate process, for which the necessary fundamental experimental data and plant operation data are available. The simulation results show that a reasonable description of the process at hand is possible with the proposed method. However, because of the complexity of the process, the limited basic experimental data reported in the literature, and the limitations of the mathematics, the model is somewhat idealized and gives a reliable representation of the influence of only a few of the variables that affect the performance of the plant.


Improvement of the Parallel Importation Logistics Process Using Big Data

  • Park, Doo-Jin;Kim, Woo-Sun
    • Journal of information and communication convergence engineering
    • /
    • v.17 no.4
    • /
    • pp.267-273
    • /
    • 2019
  • South Korea has allowed parallel importation since 1995. Parallel importation causes competition among importers in the logistics process, allowing consumers to purchase foreign brand products at low prices. Most parallel importers base product pricing on subjective judgment. Fashion products in particular have different sales rates depending on trends and seasons, so sales performance varies greatly with the timing and policy of the selling price. The merchandiser (MD) sets the price of parallel-import products by aggregating information on the imported goods, but this manual process is very time-consuming for the MD, because the customs-clearance procedures and repair work in the parallel-importation logistics process are complicated and take a significant amount of time. In this paper, we propose an improved parallel-importation logistics process based on big data, which sets the price of parallel-import products automatically.

A study on the forming process and formability improvement of clutch gear for vehicle transmission (자동차 트랜스미션용 클러치 기어의 성형 공법 및 성형성 향상에 관한 연구)

  • Lee K. O.;Kang S. S.;Kim J. M.
    • Proceedings of the Korean Society for Technology of Plasticity Conference
    • /
    • 2005.05a
    • /
    • pp.184-187
    • /
    • 2005
  • Forging is a forming process widely used for automobile parts and in the manufacturing industry. Gears such as spur, helical, and bevel gears were traditionally produced on machine tools, but recently they have been manufactured by forging. The goal of this study is to investigate the forming process using data obtained by comparing forward-extrusion and upsetting simulation results, and to improve formability under various heat-treatment conditions. Based on 3D FEM analyses of upsetting and forward extrusion, the forming process for the clutch gear is developed. Through tensile tests on specimens subjected to various heat-treatment conditions, the optimal heat-treatment condition is obtained by comparing the test results.


A study on the damage process of fatigue crack growth using the stochastic model (확률적모델을 이용한 피로균열성장의 손상과정에 관한 연구)

  • Lee, Won Suk;Cho, Kyu Seoung;Lee, Hyun Woo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.13 no.10
    • /
    • pp.130-138
    • /
    • 1996
  • In general, scatter is observed in fatigue test data due to the nonhomogeneity of the material; consequently, a statistical method is needed to describe the fatigue crack growth process precisely. Bogdanoff and Kozin suggested and developed the B-model, a probabilistic model of cumulative damage that uses a Markov process to describe the damage process. However, the B-model uses only a constant probability ratio (r), so it is not consistent with the actual damage process. In this study, an r-decreasing model using a monotonic decreasing function is introduced to improve the B-model. To verify the model, fatigue crack growth test data for Al2024-T351 and Al7075-T651 are used. Compared with the empirical distribution of the test data, the distribution from the r-decreasing model is satisfactory, and the damage process is well described from the probabilistic and physical viewpoints.
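
  • The Markov damage idea can be sketched as follows: at each duty cycle the damage either stays in its state (probability p) or advances one state (probability q), with the ratio r = p/q decreasing monotonically as damage grows, so later states are passed more quickly. The specific form r0·decayᵏ, the state count, and the parameter values are illustrative assumptions, not the paper's fitted model.

```python
import random

random.seed(42)

def cycles_to_failure(n_states=20, r0=50.0, decay=0.9):
    """Markov-chain cumulative damage: the chain starts in state 0 and
    fails on reaching n_states; the probability ratio r = p/q decreases
    monotonically with the damage state (assumed geometric form)."""
    state, cycles = 0, 0
    while state < n_states:
        r = r0 * (decay ** state)   # monotonically decreasing ratio
        q = 1.0 / (1.0 + r)         # probability of advancing one state
        cycles += 1
        if random.random() < q:
            state += 1
    return cycles

# Monte Carlo estimate of the life distribution under this model.
lives = sorted(cycles_to_failure() for _ in range(2000))
median = lives[len(lives) // 2]
print(f"median life = {median} cycles")
```

    Repeating the simulation yields the scatter in cycles-to-failure that the probabilistic model is meant to capture; with a constant r the early states would not dominate the life in the same way.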


Development of Diagnostic Expert System for Machining Process Failure Detection (가공공정의 이상상태진단을 위한 진단전문가시스템의 개발)

  • Yoo, Song-Min;Kim, Young-Jin
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.14 no.11
    • /
    • pp.147-153
    • /
    • 1997
  • A fault diagnosis technique for machining systems, one of the engineering techniques absolutely necessary for the automation of manufacturing systems, is proposed. Overall, the diagnosis proceeds in two steps: sensor data acquisition, and reasoning about the current state of the system from the given sensor data. A flexible-disk grinding process implemented on a milling machine was employed to obtain empirical manufacturing-process information. Resistance-force data during machining were acquired using a tool dynamometer, a sensor known to be comparably accurate and reliable in operation. The tool status during the process was analyzed using an influence diagram, with probabilities assigned from a statistical analysis procedure.
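
  • The probabilistic reasoning step can be sketched as a single Bayesian update: given prior tool-state probabilities and per-state likelihoods of a high force reading (as would be estimated from the dynamometer statistics), compute the posterior over tool states. All states, priors, and likelihood values here are illustrative assumptions, not figures from the paper.

```python
# Prior belief about the tool state before observing the force signal
# (illustrative values).
prior = {"normal": 0.8, "worn": 0.15, "broken": 0.05}

# Likelihood of observing a "high" resistance-force reading in each state,
# as would be estimated from the statistical analysis of dynamometer data.
likelihood_high_force = {"normal": 0.1, "worn": 0.6, "broken": 0.9}

def posterior(prior, likelihood):
    """Bayes rule: normalize prior * likelihood over all states."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

post = posterior(prior, likelihood_high_force)
for state, p in post.items():
    print(f"P({state} | high force) = {p:.2f}")
```

    With these numbers the single high-force observation shifts the most probable state from "normal" to "worn", which is the kind of inference the influence diagram chains over multiple observations.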


The Design of XMDR Data Hub for Efficient Business Process Operation (효율적인 비즈니스 프로세스 운용을 위한 XMDR 데이터 허브 설계)

  • Hwang, Chi-Gon;Jung, Gye-Dong;Choi, Young-Keun
    • The KIPS Transactions:PartD
    • /
    • v.18D no.3
    • /
    • pp.149-156
    • /
    • 2011
  • Recently, enterprise systems have required integration for data sharing and cooperation. As methodologies for integration, Service-Oriented Architecture (for integrating services) and Master Data (for integrating the data those services use) have appeared. This paper suggests a method for operating business processes (BPs) efficiently: we build an XMDR (eXtended Meta Data Registry) as a knowledge repository to support the BP and construct data hubs to operate it. The XMDR manages MDM (Master Data Management) to integrate the data, resolves heterogeneity between the data, and provides relationships to the business efficiently. It is composed of an MDR (Meta Data Registry), an ontology, and BR (Business Relations): the MDR describes relationships between metadata to resolve structural heterogeneity, the ontology describes semantic heterogeneity and relationships between data, and the BR describes relationships between tasks. The XMDR data hub effectively supports the management of master data and the interaction of different processes.
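
  • A toy sketch of the three-part composition described above: an MDR mapping system-local elements to shared master-data elements (structural heterogeneity), an ontology mapping synonyms to canonical concepts (semantic heterogeneity), and BR entries linking business tasks. Every name and mapping here is an invented illustration of the structure, not content from the paper.

```python
# Hypothetical XMDR data-hub contents (all entries illustrative).
xmdr = {
    "mdr": {  # structural: (system, local element) -> master-data element
        ("erp", "cust_nm"): "customerName",
        ("crm", "client_name"): "customerName",
    },
    "ontology": {  # semantic: synonym -> canonical concept
        "client": "customer",
        "purchaser": "customer",
    },
    "br": {  # business relations: task -> dependent tasks
        "order": ["invoice", "ship"],
    },
}

def resolve(system, element):
    """Map a system-local element to the shared master-data element,
    falling back to the local name when no mapping is registered."""
    return xmdr["mdr"].get((system, element), element)

print(resolve("erp", "cust_nm"))      # both systems resolve to the
print(resolve("crm", "client_name"))  # same master-data element
```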

A Study on Effective Utilization of Historical Data of Software Companies (소프트웨어사업자 실적데이터 활용방안에 관한 연구)

  • Kim, Joong-Han
    • Journal of Information Technology Services
    • /
    • v.7 no.1
    • /
    • pp.103-116
    • /
    • 2008
  • Efficiency and objectivity are the most critical issues in the evaluation of software projects, and improving them benefits not only the software companies participating in bids but also the administrators of the projects. This study seeks to improve the evaluation process by connecting the historical data of bidding companies in the software-company report system with the governmental procurement system. The proposed approach eliminates an unnecessary, repetitive submission step for bidding companies and provides the administrator with an objective evaluation process. This paper also proposes an automated process for quantifying the business experience of bidding companies.