• Title/Summary/Keyword: low computation

812 search results

Two-dimensional DCT architecture for imprecise computation model (중간 결과값 연산 모델을 위한 2차원 DCT 구조)

  • 임강빈;정진군;신준호;최경희;정기현
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.9
    • /
    • pp.22-32
    • /
    • 1997
  • This paper proposes an imprecise computation model for the DCT that considers the QOS of images, and a two-dimensional DCT architecture for imprecise computation. When many processes are scheduled in a hard real-time system, the system resources are shared among them, so not all processes can be allocated sufficient resources (such as processing power and communication bandwidth). The imprecise computation model can be used to provide scheduling flexibility and various QOS (quality of service) levels, to enhance fault tolerance, and to ensure service continuity in real-time systems. The DCT (discrete cosine transform) is one of the most popular image data compression techniques and is adopted in the JPEG and MPEG algorithms, since it can efficiently remove the spatial redundancy of 2-D image data. Even though many commercial data compression VLSI chips include DCT hardware, the DCT computation is still very time-consuming and its implementation requires substantial hardware resources. In this paper the DCT procedure is re-analyzed to fit the imprecise computation model. A test image is simulated on the basis of this model, and the computation time and the quality of the restored image are studied. The row-column algorithm is used to fit the proposed imprecise-computation DCT, which supports pipelined operation by pixel unit, various QOS levels, and low-speed storage devices. The architecture has reduced I/O bandwidth, which makes its VLSI implementation feasible. The architecture is verified at the architecture level using a VHDL simulator.

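
The row-column algorithm the abstract relies on can be sketched in a few lines: a 2-D DCT is computed as 1-D DCTs over the rows followed by 1-D DCTs over the columns. This is an illustrative sketch, not the paper's architecture, and the function names are mine.

```python
import numpy as np

def dct_1d(x):
    """Orthonormal DCT-II of a 1-D signal, computed directly from the definition."""
    N = len(x)
    n = np.arange(N)
    X = np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
                  for k in range(N)])
    X *= np.sqrt(2.0 / N)
    X[0] /= np.sqrt(2.0)
    return X

def dct_2d_row_column(img):
    """2-D DCT via the row-column algorithm: transform rows, then columns."""
    rows = np.apply_along_axis(dct_1d, 1, img)
    return np.apply_along_axis(dct_1d, 0, rows)

block = np.full((8, 8), 16.0)    # a flat 8x8 image block
coeffs = dct_2d_row_column(block)
# A constant block concentrates all energy in the DC coefficient, which is
# why truncating high-frequency terms (the imprecise-computation idea)
# degrades flat image regions only slightly.
print(round(coeffs[0, 0], 3))    # 128.0 (= 8 x the pixel value for an orthonormal DCT)
```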

Site-Specific Ground Motions based on Empirical Green's Function modified for the Path Effects in Layered Media (층상구조에서 지진파 전파경로를 고려하여 수정된 경험 Green 함수를 이용한 지반운동 모사)

  • 조남대;박창업
    • Proceedings of the Earthquake Engineering Society of Korea Conference
    • /
    • 2001.09a
    • /
    • pp.19-27
    • /
    • 2001
  • Seismic parameters for the computation of ground motions in southern Korea are obtained from recently recorded data, and site-independent regional and site-dependent local strong ground motions are predicted using efficient computational techniques. For the computation of ground motions, we devised an efficient procedure to compute the site-independent $x_{q}$ and site-dependent $x_{s}$ values separately. The first step of this procedure is to use the coda normalization method for the computation of the site-independent Q or the corresponding $x_{q}$ value. The next step is the computation of the $x_{s}$ values for each site separately using the given $x_{q}$ value. For the computation of ground motions, the empirical Green's function (EGF) is modified to account for the depth and distance variations of subevents on a finite fault plane using the theoretical Green's function, which is computed with a wavenumber integration technique in layered media. The site-dependent ground motions at seismic stations in a southeastern local area were properly simulated using the modified empirical Green's function method in a layered medium. The proposed method and procedures for estimating site-dependent seismic parameters and ground motions could be used efficiently in regions of low and moderate seismicity.

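
The two-step separation of path and site attenuation described above commonly rests on the high-frequency spectral decay model A(f) = A0 exp(-pi*kappa*f), so an attenuation parameter follows from the slope of ln A(f) versus f. A minimal sketch with synthetic values (illustrative only, not the authors' data or code):

```python
import numpy as np

def estimate_kappa(freqs, spectrum):
    """Least-squares fit of ln A(f) = ln A0 - pi*kappa*f; returns kappa."""
    slope, _intercept = np.polyfit(freqs, np.log(spectrum), 1)
    return -slope / np.pi

# Synthetic high-frequency spectrum with a known kappa of 0.04 s
f = np.linspace(5.0, 25.0, 50)
A = 2.0 * np.exp(-np.pi * 0.04 * f)
print(round(estimate_kappa(f, A), 4))    # 0.04
```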

Computation and Smoothing Parameter Selection in Penalized Likelihood Regression

  • Kim Young-Ju
    • Communications for Statistical Applications and Methods
    • /
    • v.12 no.3
    • /
    • pp.743-758
    • /
    • 2005
  • This paper considers penalized likelihood regression with data from an exponential family. The fast computation method applied to Gaussian data (Kim and Gu, 2004) is extended to non-Gaussian data through asymptotically efficient low-dimensional approximations, and a corresponding algorithm is proposed. Smoothing parameter selection is also explored for various exponential families, which extends the existing cross-validation method of Xiang and Wahba, evaluated only with Bernoulli data.
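
As a rough illustration of penalized likelihood estimation with a non-Gaussian (Bernoulli) response, the sketch below minimizes the negative log-likelihood plus a simple ridge penalty by Newton's method. The ridge penalty is only a stand-in for the roughness penalty of penalized likelihood regression, and all names and values are illustrative rather than taken from the paper.

```python
import numpy as np

def penalized_logistic(X, y, lam, iters=25):
    """Newton iterations for min -loglik(beta) + (lam/2) * ||beta||^2."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        grad = X.T @ (p - y) + lam * beta
        W = p * (1.0 - p)                          # IRLS weights
        H = X.T @ (X * W[:, None]) + lam * np.eye(X.shape[1])
        beta -= np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * X[:, 1])))).astype(float)
beta = penalized_logistic(X, y, lam=1.0)
# A larger lam shrinks the estimate toward zero; smoothing parameter
# selection (e.g. cross-validation) chooses lam to balance fit and smoothness.
```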

Analyses of an RFID System Using a Lightweight Algorithm

  • Kim, Jung-Tae
    • Journal of information and communication convergence engineering
    • /
    • v.7 no.1
    • /
    • pp.19-23
    • /
    • 2009
  • In this paper, we propose a general design for an RFID system that uses a lightweight algorithm. We discuss how RFID could be applied in such a system, especially with a compact protocol, and we evaluate a few protocols that have been suggested for use in passive RFID tagged systems. Through message integration and pre-computation, the security computation can be reduced without losing security features. The proposed protocol can be used in low-cost RFID systems that require a small computational load for both the back-end database and the tags.
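
The two cost-saving ideas named in the abstract, pre-computation and message integration, can be sketched as follows. This is a toy illustration with hypothetical message names, not the paper's actual protocol.

```python
import hashlib

def precompute(key: bytes, tag_id: bytes) -> bytes:
    # Done once, offline: the expensive hash is kept off the critical path.
    return hashlib.sha256(key + tag_id).digest()

def integrate(msg_a: bytes, msg_b: bytes) -> bytes:
    # The online step collapses two messages into a single cheap XOR.
    return bytes(a ^ b for a, b in zip(msg_a, msg_b))

token = precompute(b"shared-key", b"TAG-0001")   # hypothetical identifiers
challenge = b"\x10" * 32
response = integrate(token, challenge)
# The reader, holding the same precomputed token, recovers the challenge:
assert integrate(response, token) == challenge
```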

AC Loss Calculation in High Temperature Superconductors Using Slab model (Slab모델을 이용한 HTS AC 손실의 계산)

  • 최세용;주진호;류경우;나완수
    • Proceedings of the Korea Institute of Applied Superconductivity and Cryogenics Conference
    • /
    • 2001.02a
    • /
    • pp.102-103
    • /
    • 2001
  • In this paper we calculate the AC loss in a superconducting slab carrying an AC transport current. The magnetic diffusion equation for the computation of the electric field and current distribution is based on Maxwell's equations and a non-linear constitutive equation. The E-J characteristics of the superconductor are applied to the computation. We present results for the high-temperature superconductor case in comparison with a slab of low-temperature superconductor.

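
The non-linear E-J constitutive relation mentioned above is commonly modeled as a power law E = Ec (J/Jc)^n, with local dissipation p = E*J. A minimal sketch with assumed parameter values (not from the paper):

```python
import numpy as np

Ec = 1e-4    # V/m: electric-field criterion that defines Jc
Jc = 1e8     # A/m^2: critical current density (assumed value)
n = 21       # power-law index; n -> infinity approaches the Bean model

def e_field(J):
    """Power-law E-J characteristic of a superconductor."""
    return Ec * np.sign(J) * (np.abs(J) / Jc) ** n

J = np.linspace(0.8 * Jc, 1.2 * Jc, 5)
p = e_field(J) * J    # W/m^3: local dissipation density
# The steep exponent makes dissipation negligible below Jc and sharp above
# it; a full AC-loss calculation couples this relation to the magnetic
# diffusion equation over a transport-current cycle.
```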

Penalized Likelihood Regression: Fast Computation and Direct Cross-Validation

  • Kim, Young-Ju;Gu, Chong
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2005.05a
    • /
    • pp.215-219
    • /
    • 2005
  • We consider penalized likelihood regression with exponential family responses. Parallel to recent developments in Gaussian regression, fast computation through asymptotically efficient low-dimensional approximations is explored, yielding an algorithm that scales much better than the O($n^3$) algorithm for the exact solution. Customizations of the direct cross-validation strategy for smoothing parameter selection in various distribution families are also explored and evaluated.


An Alternative One-Step Computation Approach for Computing Thermal Stress of Asphalt Mixture: the Laplace Transformation (새로운 아스팔트 혼합물의 저온응력 계산 기법에 대한 고찰: 라플라스 변환)

  • Moon, Ki Hoon;Kwon, Oh Sun;Cho, Mun Jin;Cannone, Falchetto Augusto
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.39 no.1
    • /
    • pp.219-225
    • /
    • 2019
  • Computing the low-temperature performance of an asphalt mixture is an important task, especially for cold regions. It is well known that experimental creep testing is needed to compute the thermal stress and critical cracking temperature of a given asphalt mixture. Thermal stress is conventionally computed in two steps. First, the relaxation modulus is generated through the inter-conversion of the experimental creep stiffness data by applying Hopkins and Hamming's algorithm. Secondly, the thermal stress is estimated numerically by solving the convolution integral. In this paper, a one-step thermal stress computation methodology based on the Laplace transformation is introduced. After extensive experimental work and comparison of the two computation approaches, it is found that the Laplace transformation provides reliable computation results compared to the conventional two-step approach with Hopkins and Hamming's algorithm.
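
The conventional second step described above is a convolution integral, sigma(t) = ∫ E(t - s) (d eps/ds) ds. A minimal numerical sketch with a one-term exponential relaxation modulus and a constant thermal strain rate (all values illustrative, not from the paper):

```python
import numpy as np

E0, tau = 20e3, 50.0    # MPa, s: relaxation modulus E(t) = E0 * exp(-t / tau)
rate = 1e-5             # 1/s: constant thermal strain rate during cooling

t = np.linspace(0.0, 200.0, 2001)
dt = t[1] - t[0]

def stress_convolution(times):
    """Rectangle-rule evaluation of sigma(t) = sum_j E(t - s_j) * rate * dt."""
    return np.array([np.sum(E0 * np.exp(-(ti - times[:i + 1]) / tau)) * rate * dt
                     for i, ti in enumerate(times)])

sigma = stress_convolution(t)
analytic = rate * E0 * tau * (1.0 - np.exp(-t / tau))   # closed-form answer
# The numerical convolution tracks the closed form; the paper's point is
# that a Laplace-domain treatment collapses the two steps into one.
```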

Low Latency Algorithms for Iterative Codes

  • Choi, Seok-Soon;Jung, Ji-Won;Bae, Jong-Tae;Kim, Min-Hyuk;Choi, Eun-A
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.3C
    • /
    • pp.205-215
    • /
    • 2007
  • This paper presents low-latency and/or low-computation algorithms for iterative codes (turbo codes, turbo product codes, and low-density parity-check codes) for use in wireless broadband communication systems. Because of the high decoding complexity of iterative codes, this paper focuses on lower-complexity and/or lower-latency algorithms that are easily implementable in hardware and further accelerate the decoding speed.

Cryptanalysis and Improvement of a New Ultralightweight RFID Authentication Protocol with Permutation (순열을 사용한 새로운 초경량 RFID 인증 프로토콜에 대한 보안 분석 및 개선)

  • Jeon, Il-Soo;Yoon, Eun-Jun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.6
    • /
    • pp.1-9
    • /
    • 2012
  • Low-cost RFID tags are used in many applications. However, since such tags have very limited computation and storage capabilities, it is not easy to design an RFID mutual authentication protocol that can resist various security attacks. Quite recently, Tian et al. proposed a new ultralightweight authentication protocol (RAPP) for low-cost RFID tags using low-computation-cost operations (XOR, rotation, and permutation), which is claimed to resist various security attacks. In this paper, we show that RAPP is vulnerable to a de-synchronization attack and present an improved RAPP which overcomes this vulnerability.
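
The operations RAPP is built from are all bitwise-cheap: XOR, a left rotation Rot(x, y) by the Hamming weight of y, and a bit permutation Per(x, y) driven by the bits of y. The sketch below uses 8-bit words for readability, and the exact permutation ordering is an illustrative choice, not necessarily RAPP's.

```python
def rot(x: int, y: int, width: int = 8) -> int:
    """Left-rotate x by wt(y), the Hamming weight of y."""
    n = bin(y).count("1") % width
    return ((x << n) | (x >> (width - n))) & ((1 << width) - 1)

def per(x: int, y: int, width: int = 8) -> int:
    """Permute the bits of x: bits of x where y has a 1 come first,
    then bits where y has a 0, each group in descending position order."""
    bits = [(x >> i) & 1 for i in range(width - 1, -1, -1)]
    mask = [(y >> i) & 1 for i in range(width - 1, -1, -1)]
    reordered = ([b for b, m in zip(bits, mask) if m]
                 + [b for b, m in zip(bits, mask) if not m])
    out = 0
    for b in reordered:
        out = (out << 1) | b
    return out

print(rot(0b00000001, 0b00000111))    # rotate left by wt(7) = 3 -> 8
```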

Quality Analysis of Computer-Generated Holograms Depending on the Precision of the Diffraction Computation (회절연산 정밀도에 따른 CGH 기반 홀로그램 생성 품질 분석)

  • Jaehong Lee;Duksu Kim
    • Journal of Broadcast Engineering
    • /
    • v.28 no.1
    • /
    • pp.21-30
    • /
    • 2023
  • Computer-generated holography requires much more computational cost and memory space than ordinary image processing. We implemented the diffraction calculation with low-precision and mixed-precision floating-point numbers and compared the processing time and hologram quality across precisions. We compared diffraction quality with double, single, and bfloat16 precision. bfloat16 is 5.94x and 1.52x faster than double precision and single precision, respectively. bfloat16 also shows lower PSNR and SSIM and higher MSE than the other precisions. However, there is no significant effect on the reconstructed images. These results show that low precision, such as bfloat16, can be utilized for computer-generated holography.
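
The precision experiment can be approximated in NumPy, which has no bfloat16, so complex64 stands in for the reduced precision here; the Fresnel transfer function and all parameter values are illustrative, not the paper's implementation.

```python
import numpy as np

def fresnel_field(field, pitch, wavelength, z, cdtype):
    """Fresnel transfer-function propagation with the input field and
    kernel quantized to the chosen complex precision."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field.astype(cdtype)) * H.astype(cdtype))

rng = np.random.default_rng(1)
field = rng.random((64, 64))    # stand-in for one hologram plane
lo = fresnel_field(field, 8e-6, 633e-9, 0.1, np.complex64)
hi = fresnel_field(field, 8e-6, 633e-9, 0.1, np.complex128)
mse = np.mean(np.abs(lo - hi) ** 2)
# The reduced-precision field deviates only slightly from the full-precision
# one, matching the observation that precision barely affects reconstructions.
print(mse < 1e-6)
```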