• Title/Summary/Keyword: data-based model

Search Result 21,105, Processing Time 0.046 seconds

SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.12
    • /
    • pp.1607-1616
    • /
    • 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions of SoCs. However, the common models for verification are not suitable for SoCs. A novel model, WGL (Weighted Generalized List), is proposed, based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model to linearly represent the common word-level functions, and that the bit-level WGL is especially suitable for arithmetic-intensive circuits. The model is proved to be a uniform and efficient model for both bit-level and word-level functions. Then, based on the WGL model, a backward-construction logic-verification approach is presented, which reduces the time and space complexity for multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented in order to implement complex high-level verification; it combines order computation and coefficient solving, and adopts an efficient backward approach. The construction complexity is much lower than that of existing methods; e.g., the construction time for multipliers grows at a power of less than 1.6 in the size of the input word, without increasing the maximal space required. The WGL model and the verification methods based on it show their theoretical and practical significance in SoC design.

  • PDF
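The WGL data structure itself is not reproduced in the abstract above, but the task it targets, checking a bit-level circuit against a word-level polynomial specification, can be illustrated with a minimal sketch. This is not the WGL algorithm; it is a brute-force equivalence check, for a small word size only, between a shift-and-add multiplier and the word-level polynomial Z = X * Y.

```python
# Brute-force word-level equivalence check (illustration only; WGL avoids
# this exponential enumeration via canonical weighted general lists).

def bit_level_multiply(x_bits, y_bits):
    """Shift-and-add multiplier over bit vectors (LSB first)."""
    acc = 0
    for i, xb in enumerate(x_bits):
        if xb:
            partial = sum(yb << j for j, yb in enumerate(y_bits))
            acc += partial << i
    return acc

def word_level_poly(x, y):
    """Word-level specification: a single polynomial in X and Y."""
    return x * y

def verify_multiplier(n):
    """Exhaustively check the bit-level circuit against the polynomial."""
    for x in range(2 ** n):
        for y in range(2 ** n):
            x_bits = [(x >> i) & 1 for i in range(n)]
            y_bits = [(y >> i) & 1 for i in range(n)]
            if bit_level_multiply(x_bits, y_bits) != word_level_poly(x, y):
                return False
    return True

print(verify_multiplier(4))  # True: the 4-bit multiplier matches X*Y
```

The exponential cost of this enumeration is exactly what motivates canonical word-level models such as WGL, whose backward construction keeps multiplier verification polynomial.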

Model Prediction of Nutrient Supply to Ruminants from Processed Field Tick Beans

  • Yu, P.;Christensen, D.A.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.17 no.12
    • /
    • pp.1674-1680
    • /
    • 2004
  • The objective of this study was to compare the Dutch DVE/OEB system and the NRC-2001 model in predicting the supply of protein to dairy cows from processed field tick beans. Comparisons were made in terms of 1) ruminally synthesized microbial CP, 2) truly absorbed protein in the small intestine, and 3) degraded protein balance. The results showed that the predicted values from the DVE/OEB system and the NRC-2001 model were significantly correlated, with high R (>0.90) values. However, using the DVE/OEB system, the overall average microbial protein supply based on available energy was 16% higher, and the truly absorbed protein in the small intestine was 9% higher, than that predicted by the NRC-2001 model. A difference was also found in the prediction of the degraded protein balance (DPB), which was 5% lower than that predicted from the NRC-2001 model. These differences are due to considerably different factors used in the calculations of the two models, although both are based on similar principles. It should be noted that this comparison was based on limited data; a full comparison involving various types of concentrate feeds will be investigated in the future.

Research on Forecasting Framework for System Marginal Price based on Deep Recurrent Neural Networks and Statistical Analysis Models

  • Kim, Taehyun;Lee, Yoonjae;Hwangbo, Soonho
    • Clean Technology
    • /
    • v.28 no.2
    • /
    • pp.138-146
    • /
    • 2022
  • Electricity has become a factor that dramatically affects the market economy. The day-ahead system marginal price determines electricity prices, and system marginal price forecasting is critical to maintaining energy management systems. There have been several studies using mathematical and machine learning models to forecast the system marginal price, but few have developed, compared, and analyzed various machine learning and deep learning models within a data-driven framework. Therefore, in this study, different machine learning algorithms (i.e., autoregressive-based models such as the autoregressive integrated moving average model) and deep learning networks (i.e., recurrent neural network-based models such as the long short-term memory and gated recurrent unit models) are considered, and integrated evaluation metrics, including a forecasting test and information criteria, are proposed to discern the optimal forecasting model. A case study of South Korea using long-term time-series system marginal price data from 2016 to 2021 was applied to the developed framework. The results indicate that the autoregressive integrated moving average model (R-squared score: 0.97) and the gated recurrent unit model (R-squared score: 0.94) are appropriate for system marginal price forecasting. This study is expected to contribute significantly to energy management systems, and the suggested framework can be applied directly to renewable energy networks.
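The autoregressive step in such a forecasting framework can be sketched in a few lines. This is not the paper's implementation; it fits an AR(1)-with-intercept model by least squares on a synthetic "price" series and scores one-step-ahead forecasts with R-squared, the metric reported above.

```python
# Minimal AR(1) price-forecasting sketch (illustrative data, not the
# paper's SMP series or framework).
import numpy as np

def fit_ar1(series):
    """Least-squares fit of y[t] = c + phi * y[t-1]."""
    y_prev, y_next = series[:-1], series[1:]
    X = np.column_stack([np.ones_like(y_prev), y_prev])
    coef, *_ = np.linalg.lstsq(X, y_next, rcond=None)
    return coef  # [c, phi]

def r_squared(actual, predicted):
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic series generated by y[t] = 10 + 0.8*y[t-1] (no noise), so the
# fit should recover the coefficients almost exactly.
y = [5.0]
for _ in range(200):
    y.append(10.0 + 0.8 * y[-1])
y = np.array(y)

c, phi = fit_ar1(y)
pred = c + phi * y[:-1]
# phi is approximately 0.8 and R-squared is close to 1 on this clean series
```

A real ARIMA or GRU model adds differencing, moving-average terms, or recurrent state on top of this same predict-then-score loop, and model selection then compares the resulting R-squared values and information criteria.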

Grid Based Nonpoint Source Pollution Load Modelling

  • Niaraki, Abolghasem Sadeghi;Park, Jae-Min;Kim, Kye-Hyun;Lee, Chul-Yong
    • Proceedings of the Korean Spatial Information System Society Conference
    • /
    • 2007.06a
    • /
    • pp.246-251
    • /
    • 2007
  • The purpose of this study is to develop a grid-based model for calculating the critical nonpoint source (NPS) pollution load (BOD, TN, TP) in the Nak-dong area of South Korea. In the last two decades, NPS pollution has become a research topic that has resulted in the development of numerous modeling techniques. Watershed researchers need to be able to focus on the characterization of water quality, including estimates of NPS pollution loads. A Geographic Information System (GIS) has been designed for the assessment of NPS pollution in a watershed. It uses data such as DEM, precipitation, stream network, discharge, and land use data sets, and utilizes a grid representation of a watershed for the approximation of average annual pollution loads and concentrations. The difficulty in traditional NPS modeling is identifying sources and quantifying the loads. This research investigates the correlation of NPS pollution concentrations with land uses in a watershed by calculating Expected Mean Concentrations (EMC). This work was accomplished using a grid-based modelling technique that encompasses three stages. The first step is estimating the runoff grid by means of the precipitation grid and a runoff coefficient. The second step is deriving the grid-based model for calculating NPS pollution loads. The last step is validating the grid-based model against a traditional pollution load calculation by applying a statistical t-test. The results on real data illustrate the merits of the grid-based modelling approach. Therefore, this model provides a method of estimating and simulating point loads along with the spatially distributed NPS pollution loads. The pollutant concentration from local runoff is assumed to be directly related to land use in the region and is not considered to vary from event to event or within areas of similar land uses. On this assumption, a single mean estimated pollutant concentration is assigned to each land use, rather than unique concentrations for different soil types, crops, and so on.

  • PDF
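The three-stage calculation described above (runoff grid from precipitation and a runoff coefficient, then EMC-weighted loads per cell, then a watershed total) can be sketched directly. The coefficients and EMC values below are illustrative assumptions, not values from the paper.

```python
# Grid-based EMC load sketch. Illustrative runoff coefficients and EMCs
# (mg/L of BOD) per land use; real studies calibrate these per watershed.
RUNOFF_COEFF = {"urban": 0.70, "agriculture": 0.35, "forest": 0.10}
EMC_BOD = {"urban": 12.0, "agriculture": 6.0, "forest": 1.5}

CELL_AREA_M2 = 30.0 * 30.0  # assumed 30 m grid cell

def cell_load_kg(precip_mm, land_use):
    """Annual BOD load (kg) from one grid cell.

    runoff volume (m^3) = precip (m) * runoff coefficient * cell area (m^2)
    load (kg)           = volume (m^3) * EMC (mg/L) * 1000 L/m^3 / 1e6 mg/kg
    """
    volume_m3 = (precip_mm / 1000.0) * RUNOFF_COEFF[land_use] * CELL_AREA_M2
    return volume_m3 * EMC_BOD[land_use] * 1000.0 / 1e6

def watershed_load_kg(precip_grid, landuse_grid):
    """Sum cell loads over a small watershed grid."""
    return sum(
        cell_load_kg(p, lu)
        for row_p, row_lu in zip(precip_grid, landuse_grid)
        for p, lu in zip(row_p, row_lu)
    )

precip = [[1200.0, 1200.0], [1100.0, 1300.0]]        # mm/year per cell
landuse = [["urban", "forest"], ["agriculture", "urban"]]
total = watershed_load_kg(precip, landuse)  # about 21.1 kg/year of BOD
```

In a GIS implementation the same per-cell formula is applied to raster layers (precipitation, land use) rather than Python lists, which is what makes the approach scale to a full watershed.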

Vertical Structure of the Coastal Atmospheric Boundary Layer Based on Terra/MODIS Data (Terra/MODIS 자료를 이용한 연안 대기경계층의 연직구조)

  • Kim, Dong Su;Kwon, Byung Hyuk
    • Atmosphere
    • /
    • v.17 no.3
    • /
    • pp.281-289
    • /
    • 2007
  • Micrometeorological and upper-air observations were conducted in order to determine the atmospheric boundary layer depth based on data from satellites and automatic weather systems. Terra/MODIS temperature profiles and sensible heat fluxes from the gradient method were used to estimate the mixed layer height over a coastal region. Results of the integral model were in good agreement with the mixed layer height observed using a GPS radiosonde at Wolsung ($35.72^{\circ}N$, $129.48^{\circ}E$). Since the variation of the mixed layer height depends on the surface sensible heat flux, the integral model properly estimated the mixed layer height in the daytime. The buoyant heat flux, which is more important than the sensible heat flux in the coastal region, must be taken into consideration to improve the integral model. The vertical structure of the atmospheric boundary layer can thus be analyzed with only routine data and satellite data.
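A minimal sketch of the kind of integral (slab) model referred to above: the mixed layer height h grows as the surface sensible heat flux H erodes a linear potential-temperature gradient gamma, giving dh/dt = (1 + 2*beta) * H / (rho * cp * gamma * h) with an entrainment ratio beta. All parameter values here are illustrative, not from the paper.

```python
# Slab-model sketch of daytime mixed layer growth from sensible heat flux.
import math

RHO = 1.2      # air density, kg/m^3
CP = 1004.0    # specific heat of air, J/(kg K)

def mixed_layer_height(h0, flux_wm2, gamma, beta, t_seconds, dt=60.0):
    """Integrate dh/dt forward in time with a simple Euler step."""
    h = h0
    for _ in range(int(t_seconds / dt)):
        h += dt * (1.0 + 2.0 * beta) * flux_wm2 / (RHO * CP * gamma * h)
    return h

def mixed_layer_height_analytic(h0, flux_wm2, gamma, beta, t_seconds):
    """Closed form for constant flux: h = sqrt(h0^2 + 2(1+2b)Ht/(rho cp g))."""
    return math.sqrt(
        h0 ** 2
        + 2.0 * (1.0 + 2.0 * beta) * flux_wm2 * t_seconds / (RHO * CP * gamma)
    )

# Six daytime hours of 150 W/m^2 flux eroding a 0.005 K/m gradient.
h_num = mixed_layer_height(100.0, 150.0, 0.005, 0.2, 6 * 3600.0)
h_ana = mixed_layer_height_analytic(100.0, 150.0, 0.005, 0.2, 6 * 3600.0)
```

Replacing the constant sensible heat flux with a MODIS-derived, time-varying buoyancy flux is the improvement the abstract calls for; the integration loop stays the same.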

Precise Distribution Simulation of Scattered Submunitions Based on Flight Test Data

  • Yun, Sangyong;Hwang, Junsik;Suk, Jinyoung
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.18 no.1
    • /
    • pp.108-117
    • /
    • 2017
  • This paper presents a distribution simulation model for dual-purpose improved conventional munitions based on flight test data. A systematic procedure for designing a dispersion simulation model is proposed. A new accumulated broken-line graph was proposed for designing the distribution shape. The model was verified by first comparing its output with firing test results, and an application simulation was then conducted. The Monte Carlo method was used in the simulations, which reflected the relationship between ejection conditions and real distribution data. Before establishing the simulation algorithm, the dominant ejection parameter of the submunitions was examined, and the relationships between ejection conditions and distribution results were investigated. Five key distribution parameters were analyzed with respect to the ejection conditions; they reflect the characteristics of clustered particle dynamics and aerodynamics.
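The Monte Carlo step can be sketched in isolation. The paper's dispersion shape is fitted from flight-test data; here, purely for illustration, impact scatter is drawn from a Gaussian whose spread grows with ejection altitude, an assumed relationship that stands in for the fitted one.

```python
# Monte Carlo sketch of submunition ground dispersion (assumed Gaussian
# scatter; the paper derives the actual shape from firing-test data).
import random
import statistics

def simulate_impacts(n, eject_alt_m, seed=0):
    """Sample n impact points (x, y) in metres about the aim point."""
    rng = random.Random(seed)
    sigma = 0.05 * eject_alt_m  # assumption: spread proportional to altitude
    return [(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)) for _ in range(n)]

def dispersion_stats(points):
    """Mean point of impact and per-axis standard deviation."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (
        statistics.mean(xs),
        statistics.mean(ys),
        statistics.stdev(xs),
        statistics.stdev(ys),
    )

pts = simulate_impacts(20000, eject_alt_m=500.0)
mx, my, sx, sy = dispersion_stats(pts)
# With 20000 samples the recovered spread is close to the input sigma of 25 m
```

Sweeping the ejection condition (here, altitude) and recording the resulting distribution parameters is what lets such a model be validated against firing-test statistics.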

Design and Implementation of A Video Information Management System for Digital Libraries (디지털 도서관을 위한 동영상 정보 관리 시스템의 설계 및 구현)

  • 김현주;권재길;정재희;김인홍;강현석;배종민
    • Journal of Korea Multimedia Society
    • /
    • v.1 no.2
    • /
    • pp.131-141
    • /
    • 1998
  • Video data occurring in multimedia documents consist of large amounts of irregular data including audio-visual, spatial-temporal, and semantic information. In general, it is difficult to grasp the exact meaning of such video information because video data apparently consist of meaningless symbols and numerics. In order to relieve these difficulties, it is necessary to develop an integrated manager for the complex structures of video data and to provide users of video digital libraries with easy, systematic access mechanisms to video information. This paper proposes a generic integrated video information model (GIVIM), based on an extended Dublin Core metadata system, to effectively store and retrieve video documents in digital libraries. The GIVIM is an integrated model comprising a video metadata model (VMM) and a video architecture information model (VAIM). We also present design and implementation results for a video document management system (VDMS) based on the GIVIM.

  • PDF

Data Partitioning for Error Resilience and Incremental Rendering of 3D Model (삼차원 모델의 점진적인 렌더링과 오류 강인을 위한 효율적인 데이터 분할 방법 (CODAP))

  • 송문섭;안정환;김성진;한만진;호요성
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.1089-1092
    • /
    • 1999
  • Applications using 3D models have been increasing recently. Since 3D polygonal models are structured as triangular meshes, coding polygonal models in strips of triangles is an efficient way of representing the data. These strips may be very long and may take a long time to render or transmit. If the triangle strips are partitioned, it is possible to perform more efficient data transmission in an error-prone environment and to display the 3D model progressively. In this paper, we devised Component Based Data Partitioning (CODAP), which is based on Topological Surgery (TS). In order to support error resilience and progressive build-up rendering, we partition the connectivity, geometry, and properties of a 3D polygonal model. Each partitioned component is independently encoded, and resynchronization between partitioned components is performed.

  • PDF
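The core idea, splitting a long triangle strip into independently decodable components with resynchronization at the boundaries, can be sketched without the MPEG-4 bitstream details. In this illustration (not the CODAP format itself) each partition repeats the last two vertices of the previous one, so every component can be rendered or retransmitted on its own; vertex winding alternation is ignored for brevity.

```python
# Triangle-strip partitioning sketch with two-vertex resynchronization
# overlap, in the spirit of component-based data partitioning.

def strip_to_triangles(strip):
    """A strip v0..vk encodes triangles (v[i], v[i+1], v[i+2])."""
    return [tuple(strip[i:i + 3]) for i in range(len(strip) - 2)]

def partition_strip(strip, max_len):
    """Split a strip into chunks that overlap by two vertices."""
    parts, start = [], 0
    while start < len(strip) - 2:
        end = min(start + max_len, len(strip))
        parts.append(strip[start:end])
        start = end - 2  # resynchronization: repeat two boundary vertices
    return parts

strip = list(range(10))            # one long strip encoding 8 triangles
parts = partition_strip(strip, 5)  # -> [[0..4], [3..7], [6..9]]

# Decoding each partition independently recovers every original triangle.
rebuilt = [t for p in parts for t in strip_to_triangles(p)]
assert rebuilt == strip_to_triangles(strip)
```

Because each chunk carries its own starting vertices, losing one chunk in transmission costs only that chunk's triangles, which is precisely the error-resilience property the abstract describes.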

Likelihood-Based Inference on Genetic Variance Component with a Hierarchical Poisson Generalized Linear Mixed Model

  • Lee, C.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.13 no.8
    • /
    • pp.1035-1039
    • /
    • 2000
  • This study developed a Poisson generalized linear mixed model and a procedure to estimate genetic parameters for count traits. The method, derived from a frequentist perspective, was based on hierarchical likelihood, and the maximum adjusted profile hierarchical likelihood was employed to estimate the dispersion parameters of the genetic random effects. The current approach is a generalization of Henderson's method to non-normal data, and it was applied to simulated data. Underestimation was observed in the genetic variance component estimates for data simulated with large heritability when using the Poisson generalized linear mixed model and the corresponding maximum adjusted profile hierarchical likelihood. However, the current method fitted data generated with small heritability better than data generated with large heritability.
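The Poisson likelihood machinery underlying such a model can be shown in miniature. A full hierarchical GLMM with genetic random effects and adjusted profile likelihood is beyond a few lines; this sketch fits only the inner building block, a fixed-effects Poisson log-linear model, by Newton-Raphson on illustrative count data.

```python
# Intercept-only Poisson log-linear fit by Newton-Raphson. For this model
# the MLE satisfies exp(beta0) = mean(counts), which makes it easy to check.
import math

def fit_poisson_intercept(counts, iters=50):
    """MLE of beta0 in log(mu) = beta0."""
    n = len(counts)
    total = sum(counts)
    beta = 0.0
    for _ in range(iters):
        mu = math.exp(beta)
        score = total - n * mu   # d loglik / d beta
        info = n * mu            # Fisher information
        beta += score / info     # Newton-Raphson update
    return beta

counts = [2, 0, 3, 1, 4, 2, 1, 3]   # illustrative count trait records
beta0 = fit_poisson_intercept(counts)
# exp(beta0) equals the sample mean of the counts, 2.0
```

A hierarchical likelihood approach wraps this same score/information update in an outer loop that also updates the random genetic effects and their dispersion parameter, which is where the heritability-dependent bias noted above arises.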

Intrusion Detection Scheme Using Traffic Prediction for Wireless Industrial Networks

  • Wei, Min;Kim, Kee-Cheon
    • Journal of Communications and Networks
    • /
    • v.14 no.3
    • /
    • pp.310-318
    • /
    • 2012
  • Detecting intrusion attacks accurately and rapidly in wireless networks is one of the most challenging security problems. Intrusion attacks of various types can be detected by the change in traffic flow that they induce. Wireless industrial networks based on the wireless networks for industrial automation-process automation (WIA-PA) standard use a superframe to schedule network communications. We propose an intrusion detection system for WIA-PA networks. After modeling and analyzing traffic flow data with time-series techniques, we propose a data traffic prediction model based on an autoregressive moving average (ARMA) of the time-series data. The model can quickly and precisely predict network traffic. We initialized the model with data traffic measurements taken by a 16-channel analyzer. Test results show that our scheme can effectively detect intrusion attacks, improve overall network performance, and prolong the network lifetime.
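The detection-by-prediction idea above can be sketched with a stand-in predictor. Instead of a full ARMA fit, a simple moving average forecasts the next traffic sample; an intrusion is flagged when the observed traffic deviates from the prediction by more than a threshold. The window and threshold values are illustrative, not from the paper.

```python
# Prediction-residual intrusion detection sketch (moving-average predictor
# standing in for the ARMA model; traffic values are illustrative).

def detect_intrusions(traffic, window=4, threshold=20.0):
    """Return indices where |observed - predicted| exceeds the threshold."""
    alarms = []
    for t in range(window, len(traffic)):
        predicted = sum(traffic[t - window:t]) / window
        if abs(traffic[t] - predicted) > threshold:
            alarms.append(t)
    return alarms

# Steady traffic of ~50 packets/s with an injected flood at index 8.
traffic = [50, 51, 49, 50, 50, 52, 49, 51, 95, 50, 50, 51]
print(detect_intrusions(traffic))  # [8]: only the flood is flagged
```

An ARMA predictor plays the same role as the moving average here but tracks both autoregressive and moving-average structure in the superframe traffic, which is what allows fast, precise prediction on real WIA-PA measurements.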