• Title/Summary/Keyword: Uniform Partitioning


Calculation of the coupling coefficient for trapezoidal gratings using extended additional layer method (확장된 새로운 층 방법을 이용한 사다리꼴 회절격자의 결합계수 계산)

  • 조성찬;김부균;김용곤
    • Korean Journal of Optics and Photonics / v.7 no.3 / pp.207-212 / 1996
  • We propose an extended additional layer method (EALM) for calculating the coupling coefficient of arbitrarily shaped diffraction gratings. In the EALM, to determine the unperturbed field distribution, the grating region is replaced by a new uniform layer whose dielectric constant is the average of the dielectric constant of the grating region in both the longitudinal and transverse directions. Using this method, we calculate the coupling coefficient of a five-layer distributed feedback structure with trapezoidal and triangular gratings. The validity of the method is established by comparison with results obtained by partitioning the grating region into up to five uniform layers.
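
As a rough illustration of the averaging step described above, the sketch below numerically averages the dielectric constant of one period of a trapezoidal grating over both the longitudinal and transverse directions to obtain the equivalent uniform layer. All geometry values and the function name are hypothetical placeholders, not parameters taken from the paper.

```python
import numpy as np

def average_dielectric_trapezoid(eps_tooth, eps_groove, period, top_width,
                                 bottom_width, depth, nz=400, nx=400):
    """Average the dielectric constant of one period of a trapezoidal
    grating over both the longitudinal (z) and transverse (x) directions."""
    z = np.linspace(0.0, period, nz, endpoint=False)   # longitudinal samples
    x = np.linspace(0.0, depth, nx, endpoint=False)    # transverse samples (into the tooth)
    eps = np.full((nx, nz), eps_groove, dtype=float)
    for i, xi in enumerate(x):
        # tooth half-width shrinks linearly from bottom_width/2 to top_width/2
        frac = xi / depth
        half_w = 0.5 * (bottom_width + (top_width - bottom_width) * frac)
        eps[i, np.abs(z - 0.5 * period) <= half_w] = eps_tooth
    return eps.mean()   # dielectric constant of the equivalent uniform layer

# hypothetical example values, for illustration only
eps_bar = average_dielectric_trapezoid(eps_tooth=12.25, eps_groove=10.9,
                                       period=0.24, top_width=0.06,
                                       bottom_width=0.12, depth=0.03)
print(f"equivalent uniform-layer dielectric constant: {eps_bar:.4f}")
```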


Revision of ART with Iterative Partitioning for Performance Improvement (입력 도메인 반복 분할 기법 성능 향상을 위한 고려 사항 분석)

  • Shin, Seung-Hun;Park, Seung-Kyu;Jung, Ki-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.3 / pp.64-76 / 2009
  • Adaptive Random Testing through Iterative Partitioning (IP-ART) is one of the Adaptive Random Testing (ART) techniques. IP-ART uses an iterative partitioning method for the input domain to improve the performance of early versions of ART, which have significant drawbacks in computation time. Another version of IP-ART, named EIP-ART (IP-ART with Enlarged Input Domain), uses a virtually enlarged input domain to remove the unevenly distributed parts near the boundary of the domain. EIP-ART mitigates the non-uniform test case distribution of IP-ART and achieves relatively high performance in a variety of input domain environments. The EIP-ART algorithm, however, has the drawback of higher computation time for generating test cases, mainly due to the additional workload from the enlarged input domain. For this reason, a revised version of IP-ART without input domain enlargement needs to improve the distribution of test cases in order to remove this additional time cost. We explore three smoothing algorithms that influence the distribution of test cases and analyze whether they yield any performance improvement. The simulation results show that the restriction-area management algorithm achieves better performance than the others.
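
The sketch below illustrates the general iterative-partitioning idea behind IP-ART in a simplified 2-D form: candidate test cases are drawn only from grid cells that neither contain nor border an existing test case, and the grid is refined when no such cell remains. It is an assumption-laden simplification for illustration, not the authors' algorithm or its EIP-ART variant.

```python
import random

def ip_art_next(test_cases, bounds, grid=2, max_grid=256):
    """Pick the next test case from a cell that neither contains nor
    borders an already-executed test case; refine the grid if none exists."""
    (xlo, xhi), (ylo, yhi) = bounds
    while grid <= max_grid:
        w, h = (xhi - xlo) / grid, (yhi - ylo) / grid
        occupied = {(int((x - xlo) / w), int((y - ylo) / h)) for x, y in test_cases}
        blocked = {(i + di, j + dj) for i, j in occupied
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)}
        free = [(i, j) for i in range(grid) for j in range(grid)
                if (i, j) not in blocked]
        if free:
            i, j = random.choice(free)
            return (xlo + (i + random.random()) * w,
                    ylo + (j + random.random()) * h), grid
        grid *= 2                      # no candidate cell left: refine the partition
    raise RuntimeError("grid refinement limit reached")

# usage: generate 10 test cases over the unit square
cases, grid = [], 2
for _ in range(10):
    tc, grid = ip_art_next(cases, ((0.0, 1.0), (0.0, 1.0)), grid)
    cases.append(tc)
print(cases)
```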

Dynamic Analysis of a Moving Vehicle on Flexible Beam Structures (I): General Approach

  • Park, Tae-Won;Park, Chan-Jong
    • International Journal of Precision Engineering and Manufacturing / v.3 no.4 / pp.54-63 / 2002
  • In recent years, mechanical systems such as high-speed vehicles and railway trains moving on elastic beam structures have become a very important issue. In this paper, a general approach that can predict the dynamic behavior of a constrained mechanical system moving on a flexible beam structure is proposed. Various supporting conditions for the foundation support are considered for the elastic beam structure. The elastic structure is assumed to be a non-uniform, linear Bernoulli-Euler beam with a proportional damping effect. A combined differential-algebraic equation of motion is derived using multi-body dynamics theory and the finite element method. The proposed equations of motion can be solved numerically using the generalized coordinate partitioning method and a predictor-corrector algorithm, which is an implicit multi-step integration method.
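
Generalized coordinate partitioning is commonly carried out by factorizing the constraint Jacobian with pivoting, so that the best-conditioned columns become the dependent coordinates and the rest are treated as independent. The sketch below shows one such partitioning step using QR with column pivoting; it is a generic illustration under that assumption, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import qr

def partition_coordinates(Phi_q):
    """Split generalized coordinates into dependent and independent sets
    using QR with column pivoting on the m x n constraint Jacobian Phi_q.
    The m best-conditioned columns are taken as dependent coordinates."""
    m, _ = Phi_q.shape
    _, _, piv = qr(Phi_q, pivoting=True)   # columns ordered by pivot magnitude
    dependent, independent = piv[:m], piv[m:]
    return np.sort(dependent), np.sort(independent)

# toy example: 2 constraint equations on 4 generalized coordinates
Phi_q = np.array([[1.0, 0.0, 2.0, 0.0],
                  [0.0, 1.0, 0.0, 3.0]])
dep, ind = partition_coordinates(Phi_q)
print("dependent:", dep, "independent:", ind)
```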

Composite adaptive neural network controller for nonlinear systems (비선형 시스템제어를 위한 복합적응 신경회로망)

  • 김효규;오세영;김성권
    • 제어로봇시스템학회 학술대회논문집 (Institute of Control, Robotics and Systems Conference Proceedings) / 1993.10a / pp.14-19 / 1993
  • In this paper, we propose indirect learning and direct adaptive control schemes using neural networks, i.e., composite adaptive neural control, for a class of continuous nonlinear systems. With the indirect learning method, the neural network learns the nonlinear basis of the system inverse dynamics by a modified backpropagation learning rule. The basis spans the local vector space of the inverse dynamics; the direct adaptation method is applied when the indirect learning result is within a prescribed error tolerance, so the method is closely related to adaptive control methods. A hash addressing technique, similar to the CMAC functional architecture, is also introduced to partition the network hidden nodes according to the system states, so that global neurocontrol properties can be organized from local ones. For uniform stability, sliding mode control is introduced when the neural network has not yet sufficiently learned the system dynamics. With proper assumptions on the controlled system, proofs of global stability and tracking error convergence can be given. The performance of the proposed control scheme is demonstrated with simulation results for a nonlinear system.
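
The sketch below illustrates a CMAC-like hash addressing scheme of the kind described above: the quantized system state is hashed to a small subset of hidden nodes, so only locally relevant weights are read and adapted. All sizes, the class name, and the update rule are assumptions made for illustration, not the paper's network.

```python
import numpy as np

class HashedHiddenLayer:
    """CMAC-like hash addressing: each quantized state activates only a
    small subset of hidden nodes, so adaptation stays local."""

    def __init__(self, n_hidden=1024, n_active=16, resolution=0.1, seed=0):
        self.w = np.zeros(n_hidden)
        self.n_hidden, self.n_active = n_hidden, n_active
        self.resolution, self.seed = resolution, seed

    def _active_nodes(self, state):
        # quantize the state and hash the cell to a deterministic node subset
        cell = tuple(np.floor(np.asarray(state) / self.resolution).astype(int))
        rng = np.random.default_rng(hash(cell) % (2**32) + self.seed)
        return rng.choice(self.n_hidden, size=self.n_active, replace=False)

    def output(self, state):
        return self.w[self._active_nodes(state)].sum()

    def adapt(self, state, error, lr=0.05):
        # only the weights addressed by the current state are updated
        self.w[self._active_nodes(state)] += lr * error / self.n_active

layer = HashedHiddenLayer()
layer.adapt(state=[0.3, -1.2], error=0.5)
print(layer.output([0.3, -1.2]))
```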


SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • Journal of Korea Multimedia Society / v.9 no.12 / pp.1607-1616 / 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions in SoCs. However, the common models for verification are not suitable for SoCs. A novel model, WGL (Weighted Generalized List), is proposed, which is based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model that linearly represents the common word-level functions, and that the bit-level WGL is especially suitable for arithmetic-intensive circuits. The model is proved to be a uniform and efficient model for both bit-level and word-level functions. Based on the WGL model, a backward-construction logic-verification approach is then presented, which reduces the time and space complexity for multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented in order to implement complex high-level verification; it combines order computation and coefficient solving, and adopts an efficient backward approach. The construction complexity is much less than that of existing methods, e.g. the construction time for multipliers grows with a power of less than 1.6 in the size of the input word, without increasing the maximal space required. The WGL model and the verification methods based on it show their theoretical and practical significance in SoC design.


Efficient Partitioning of Matched Filter for Long Pulse in Active Sonar Application (능동 소나에서 시간적으로 긴 펄스에 대한 정합 필터의 효율적인 분할 기법)

  • Shin, Donghoon;Kim, Jin Seok
    • The Journal of the Acoustical Society of Korea / v.33 no.4 / pp.262-267 / 2014
  • Recently, long pulses have been transmitted for target detection in active sonar applications. Matched filtering implemented with a simple convolution algorithm requires massive computational power for a long replica. The computational load is reduced significantly by implementing the convolution in the frequency domain with the overlap-add method, but performance degrades when a specified input/output system delay constrains the FFT size. For performance improvement, the replica can be partitioned into uniform blocks (FDL), re-using IFFT operations, or into variable blocks of increasing length (MC), using the largest possible blocks to calculate the convolution. In this paper, by combining the strong points of the two methods, we propose a new filter partition structure that allows further optimization over the previous two methods.
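
The uniform-block (FDL) partitioning mentioned above can be illustrated with a frequency-domain delay line: the replica is cut into equal blocks whose spectra are precomputed, and each input frame is filtered by a spectral multiply-accumulate over those blocks. The sketch below is an offline simplification under assumed block sizes, not the paper's proposed structure or a real-time sonar implementation.

```python
import numpy as np

def uniform_partitioned_convolution(x, replica, block):
    """Uniformly partitioned overlap-add (FDL-style) convolution."""
    n_parts = int(np.ceil(len(replica) / block))
    # precompute the spectrum of each replica block (FFT size 2*block)
    H = [np.fft.rfft(replica[i*block:(i+1)*block], 2*block) for i in range(n_parts)]
    fdl = [np.zeros(block + 1, dtype=complex) for _ in range(n_parts)]
    out = np.zeros(len(x) + len(replica) - 1)
    n_frames = int(np.ceil(len(x) / block)) + n_parts - 1  # extra frames flush the delay line
    for f in range(n_frames):
        frame = np.zeros(block)
        seg = x[f*block:(f+1)*block]
        frame[:len(seg)] = seg
        fdl.insert(0, np.fft.rfft(frame, 2*block))   # newest input spectrum first
        fdl.pop()
        acc = sum(h * s for h, s in zip(H, fdl))     # spectral multiply-accumulate
        y = np.fft.irfft(acc, 2*block)
        lo, hi = f*block, min(f*block + 2*block, len(out))
        out[lo:hi] += y[:hi-lo]                      # overlap-add into the output
    return out

# sanity check against direct convolution
x, h = np.random.randn(1000), np.random.randn(300)
assert np.allclose(uniform_partitioned_convolution(x, h, 128), np.convolve(x, h))
```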

Improved Expectation and Maximization via a New Method for Initial Values (새로운 초기치 선정 방법을 이용한 향상된 EM 알고리즘)

  • Kim, Sung-Soo;Kang, Jee-Hye
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.4 / pp.416-426 / 2003
  • In this paper we propose a new method for choosing the initial values of the Expectation-Maximization (EM) algorithm, which has been used for clustering in various applications. Conventionally, the initial values were chosen randomly, which sometimes yields undesired local convergence. Later, the K-means clustering method was employed to choose better initial values, and this approach is now widely used. However, the method using K-means still has the same problem of converging to local points. In order to resolve this problem, a new method of initializing values for the EM process is proposed. The proposed method not only strengthens the characteristics of EM, greatly reducing the number of iterations, but also removes the possibility of falling into local convergence.
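
The sketch below shows the conventional baseline the paper starts from: run K-means with random seeds and convert its result into initial weights, means, and covariances for the EM iterations of a Gaussian mixture. The paper's improved initialization itself is not reproduced; all names and settings here are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means with randomly chosen initial centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None])**2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def em_init_from_kmeans(X, k):
    """Turn a K-means result into initial GMM parameters for EM."""
    centers, labels = kmeans(X, k)
    weights = np.array([(labels == j).mean() for j in range(k)])
    covs = np.array([np.cov(X[labels == j].T) + 1e-6 * np.eye(X.shape[1])
                     for j in range(k)])
    return weights, centers, covs

X = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + [5, 5]])
w, mu, S = em_init_from_kmeans(X, 2)
print(w, mu, sep="\n")
```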

A New Fast EM Algorithm (새로운 고속 EM 알고리즘)

  • 김성수;강지혜
    • Journal of KIISE: Computer Systems and Theory / v.31 no.10 / pp.575-587 / 2004
  • In this paper, a new Fast Expectation-Maximization algorithm (FEM) is proposed. First, the K-means algorithm is modified to reduce the number of iterations needed to find the initial values that are then used in the EM process. Conventionally, the initial values in K-means clustering are chosen randomly, which sometimes forces the clustering process to converge to undesired center points. A uniform partitioning method is added to the conventional K-means to extract proper initial points for each cluster. Second, the effect of the posterior probability is emphasized so that the application of Maximum Likelihood Posterior (MLP) yields fast convergence. The proposed FEM strengthens the characteristics of the conventional EM by reinforcing the speed of convergence. The superiority of FEM is demonstrated by experimental results showing improvements over EM and accelerated convergence in parameter estimation procedures.
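
The uniform partitioning idea mentioned above can be sketched as follows: the data range (here, its dominant principal direction) is split into equal slices and one representative per slice is taken as a K-means seed, so the seeds are spread over the domain instead of being drawn at random. The abstract does not give the exact partitioning rule, so the details below are assumptions for illustration.

```python
import numpy as np

def uniform_partition_seeds(X, k):
    """Pick K-means seeds by uniformly partitioning the range of the data
    along its dominant principal direction into k equal slices and taking
    the mean of each slice as one seed."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    t = Xc @ Vt[0]                                  # projection on the dominant axis
    edges = np.linspace(t.min(), t.max(), k + 1)    # k equal-width slices
    seeds = []
    for j in range(k):
        in_slice = (t >= edges[j]) & (t <= edges[j + 1])
        seeds.append(X[in_slice].mean(0) if np.any(in_slice)
                     else X[np.argmin(np.abs(t - edges[j]))])
    return np.array(seeds)

X = np.vstack([np.random.randn(150, 2), np.random.randn(150, 2) + [6, 2]])
print(uniform_partition_seeds(X, 2))
```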

Parallel Rendering of High Quality Animation based on a Dynamic Workload Allocation Scheme (작업영역의 동적 할당을 통한 고화질 애니메이션의 병렬 렌더링)

  • Rhee, Yun-Seok
    • Journal of the Korea Society of Computer and Information / v.13 no.1 / pp.109-116 / 2008
  • Even though many studies on parallel rendering based on PC clusters have been done, most of them do not cope with non-uniform scenes, where the locations of 3D models are biased. In this work, we have built a PC cluster system with POV-Ray, a free rendering package in the public domain, and developed an adaptive load balancing scheme to optimize parallel efficiency. In particular, we noticed that a frame of a 3D animation is closely coherent with its adjacent frames, and thus the distribution of the computation load can be estimated from the computation time of the previous frame. Experimental results with two real animation data sets show that the proposed scheme reduces execution time by 40% compared to a simple static partitioning scheme.
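
The frame-coherence idea above can be sketched as a simple cost-balanced row partitioner: per-row render times measured on the previous frame serve as the cost estimate, and the image rows are cut into contiguous strips of roughly equal estimated cost, one per node. The function and its inputs are hypothetical, not the paper's scheduler.

```python
import numpy as np

def partition_rows_by_cost(prev_frame_row_times, n_nodes):
    """Split image rows into contiguous strips with roughly equal estimated
    cost, where the estimate is the per-row render time of the previous
    frame (adjacent frames are assumed coherent)."""
    cost = np.asarray(prev_frame_row_times, dtype=float)
    cum = np.cumsum(cost)
    targets = cum[-1] * np.arange(1, n_nodes) / n_nodes   # equal-cost cut points
    cuts = np.searchsorted(cum, targets) + 1              # first row of the next strip
    bounds = np.concatenate(([0], cuts, [len(cost)]))
    return [(int(bounds[i]), int(bounds[i + 1])) for i in range(n_nodes)]

# example: the lower half of the previous frame was 4x more expensive to render
row_times = np.concatenate([np.full(240, 1.0), np.full(240, 4.0)])
print(partition_rows_by_cost(row_times, 4))
# the cheap upper rows form larger strips than the expensive lower rows
```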


SPIHT-based Subband Division Compression Method for High-resolution Image Compression (고해상도 영상 압축을 위한 SPIHT 기반의 부대역 분할 압축 방법)

  • Kim, Woosuk;Park, Byung-Seo;Oh, Kwan-Jung;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.2 / pp.198-206 / 2022
  • This paper proposes a method to solve problems that may occur when SPIHT (set partitioning in hierarchical trees) is used in a dedicated codec for compressing ultra-high-resolution complex holograms. The development of codecs for complex holograms can be largely divided into creating dedicated compression methods and using anchor codecs such as HEVC and JPEG2000 with added post-processing techniques. In the case of a dedicated compression method, a separate conversion tool is required to analyze the spatial characteristics of complex holograms. Zero-tree-based subband algorithms such as EZW and SPIHT have the problem that, when coding high-resolution images, intact subband information is not properly transmitted during bitstream control. This paper proposes a method of dividing the wavelet subbands to solve this problem. By compressing each divided subband separately, the information across the subbands is kept uniform. The proposed method showed better restoration results in terms of PSNR than the existing method.
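
The subband-division step described above can be sketched as splitting each wavelet subband into equal tiles and giving every tile the same share of the byte budget, so that no subband is starved during bitstream control. The sketch covers only this splitting and a uniform budget policy; the SPIHT coder itself and the hologram-specific transform are omitted, and all names are illustrative assumptions.

```python
import numpy as np

def split_subband(band, tiles_per_side):
    """Divide one wavelet subband into equal rectangular tiles so that each
    tile can be coded (e.g. by a SPIHT-style coder) independently."""
    rows = np.array_split(band, tiles_per_side, axis=0)
    return [tile for r in rows for tile in np.array_split(r, tiles_per_side, axis=1)]

def uniform_budget(total_bytes, subbands, tiles_per_side):
    """Assign every tile of every subband the same byte budget so the coded
    information stays uniform across the subbands (hypothetical policy)."""
    n_tiles = len(subbands) * tiles_per_side**2
    per_tile = total_bytes // n_tiles
    return {(b, t): per_tile
            for b in range(len(subbands)) for t in range(tiles_per_side**2)}

band = np.random.randn(512, 512)            # one high-resolution subband
tiles = split_subband(band, 4)              # 16 tiles of 128 x 128
budget = uniform_budget(2**20, [band], 4)   # 1 MiB spread uniformly over the tiles
print(len(tiles), tiles[0].shape, budget[(0, 0)])
```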