• Title/Summary/Keyword: k number of least time

Search results: 300 (processing time: 0.026 seconds)

UWB Pulse Generation Method for the FCC Emission Mask (FCC 방출 전력 마스크에 적합한 UWB 펄스 생성 방법)

  • Park, Jang-Woo;Cho, Sung-Eon;Cho, Kyung-Ryong
    • Journal of Advanced Navigation Technology
    • /
    • v.10 no.4
    • /
    • pp.333-341
    • /
    • 2006
  • This paper analyzes the spectral power properties of various time-hopping UWB signals and shows that the power spectral density (PSD) of each signal is essentially determined by the PSD of the pulse used in the signal. A pulse design method that fully utilizes the FCC emission mask is proposed. The method linearly combines Gaussian derivative pulses of arbitrary order, with the coefficients of the linear combination calculated by the least-square-error (LSE) method. Various parameters, such as the number of coefficients and the types of basic pulses, are considered when calculating the PSDs and pulse shapes of the new pulses.

  • PDF
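The pulse design described above can be sketched as a linear least-squares fit: take Gaussian derivative pulses of several orders as basis spectra and solve for the combination coefficients against a target mask. A minimal sketch, assuming a simplified flat 3.1-10.6 GHz target in place of the actual FCC mask; `sigma` and the set of derivative orders are illustrative choices, not the paper's parameters.

```python
import numpy as np

def gaussian_derivative_spectrum(f, n, sigma):
    # spectral magnitude of the n-th derivative of a Gaussian pulse
    # (up to a constant scale): (2*pi*f)^n * exp(-2*(pi*f*sigma)^2)
    return (2 * np.pi * np.abs(f)) ** n * np.exp(-2 * (np.pi * f * sigma) ** 2)

# hypothetical simplified target mask: flat in 3.1-10.6 GHz, zero elsewhere
f = np.linspace(0.1e9, 12e9, 400)
target = ((f >= 3.1e9) & (f <= 10.6e9)).astype(float)

sigma = 0.05e-9           # pulse width parameter (assumed)
orders = range(1, 8)      # derivative orders used as basis pulses

cols = []
for n in orders:
    s = gaussian_derivative_spectrum(f, n, sigma)
    cols.append(s / s.max())        # normalize each basis spectrum
A = np.column_stack(cols)

# least-squares coefficients of the linear combination
c, *_ = np.linalg.lstsq(A, target, rcond=None)
fitted = A @ c
print("residual:", np.linalg.norm(target - fitted))
```

The order-n basis spectrum peaks near sqrt(n)/(2*pi*sigma), so the chosen orders spread their peaks across the 3.1-10.6 GHz band before the fit.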

Routing Algorithm with Adaptive Weight Function based on Possible Available Wavelength in Optical WDM Networks

  • Pavarangkoon, Praphan;Thipchaksurat, Sakchai;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1338-1341
    • /
    • 2004
  • In this paper, we propose a new routing and wavelength assignment algorithm, called the Possible Available Wavelength (PAW) algorithm. The weight of a link is the main factor in the PAW routing decision and is defined as a function of hop count and available wavelengths. This function includes a factor accounting for the number of wavelengths that are currently in use but expected to become available after a certain time. Session requests from users are routed over the links with the greatest link weight using Dijkstra's shortest path algorithm, so the selected lightpath has the least hop count and the greatest number of possible available wavelengths. The impact of the proposed link weight function on blocking probability and link utilization is investigated by computer simulation and compared with the traditional mechanism. The results show that the proposed PAW algorithm achieves better performance in terms of blocking probability and link utilization.

  • PDF
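The routing step described above, Dijkstra's algorithm over links weighted by hop count and available wavelengths, can be sketched as follows. The `link_cost` function is an assumed stand-in for the paper's weight function (each hop costs more when fewer wavelengths are available), and the four-node topology is a toy example.

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by cumulative link cost (plain Dijkstra)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph[u]:
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def link_cost(available_wavelengths, alpha=1.0):
    # assumed weight function: every hop contributes a base cost of 1,
    # reduced-competition links (more free wavelengths) cost less extra
    return 1.0 + alpha / (1.0 + available_wavelengths)

# toy topology: free-wavelength counts per link; both A-D routes are 2 hops,
# but A-C-D has more possible available wavelengths
wavelengths = {("A", "B"): 2, ("B", "D"): 2, ("A", "C"): 8, ("C", "D"): 8}
graph = {"A": [], "B": [], "C": [], "D": []}
for (u, v), w in wavelengths.items():
    graph[u].append((v, link_cost(w)))
    graph[v].append((u, link_cost(w)))

print(dijkstra(graph, "A", "D"))  # → ['A', 'C', 'D']
```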

Rank-Based Nonlinear Normalization of Oligonucleotide Arrays

  • Park, Peter J.;Kohane, Isaac S.;Kim, Ju Han
    • Genomics & Informatics
    • /
    • v.1 no.2
    • /
    • pp.94-100
    • /
    • 2003
  • Motivation: Many have observed a nonlinear relationship between the signal intensity and the transcript abundance in microarray data. The first step in analyzing the data is to normalize it properly, and this should include a correction for the nonlinearity. The commonly used linear normalization schemes do not address this problem. Results: Nonlinearity is present in both cDNA and oligonucleotide arrays, but we concentrate on the latter in this paper. Across a set of chips, we identify those genes whose within-chip ranks are relatively constant compared to other genes of similar intensity. For each gene, we compute the sum of the squares of the differences in its within-chip ranks between every pair of chips as our statistic, and we select a small fraction of the genes with the minimal changes in ranks at each intensity level. These genes are most likely to be non-differentially expressed and are subsequently used in the normalization procedure. This method is a generalization of the rank-invariant normalization (Li and Wong, 2001), using all available chips rather than two at a time to gather more information, while using the chip that is least likely to be affected by nonlinear effects as the reference chip. The assumption in our method is that there are at least a small number of non-differentially expressed genes across the intensity range. The normalized expression values can be substantially different from the unnormalized values and may result in altered downstream analysis.
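The rank statistic described above can be sketched directly: compute within-chip ranks, sum the squared rank differences over all chip pairs, and keep the genes with the smallest statistic. A minimal sketch on simulated data; for simplicity it selects a global 10% quantile rather than selecting per intensity level as the paper does.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy expression matrix: 1000 genes x 4 chips (simulated, not real data)
X = rng.lognormal(mean=5, sigma=1, size=(1000, 4))

# within-chip rank of every gene (0 = lowest intensity on that chip)
ranks = X.argsort(axis=0).argsort(axis=0)

# statistic: sum over all chip pairs of squared within-chip rank differences
n_chips = X.shape[1]
stat = np.zeros(X.shape[0])
for i in range(n_chips):
    for j in range(i + 1, n_chips):
        stat += (ranks[:, i] - ranks[:, j]) ** 2

# keep the genes whose ranks change least across chips; these are the
# candidate non-differentially expressed genes used for normalization
keep = stat <= np.quantile(stat, 0.10)
print(keep.sum(), "genes selected out of", X.shape[0])
```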

Gauss-Newton Based Emitter Location Method Using Successive TDOA and FDOA Measurements (연속 측정된 TDOA와 FDOA를 이용한 Gauss-Newton 기법 기반의 신호원 위치추정 방법)

  • Kim, Yong-Hee;Kim, Dong-Gyu;Han, Jin-Woo;Song, Kyu-Ha;Kim, Hyoung-Nam
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.7
    • /
    • pp.76-84
    • /
    • 2013
  • In passive emitter localization using instantaneous TDOA (time difference of arrival) and FDOA (frequency difference of arrival) measurements, estimation accuracy can be improved by collecting additional measurements, which normally requires increasing the number of sensors. In an electronic warfare environment, however, a large number of sensors causes a loss of military strength due to the high probability of intercept, and additional processes such as the data link and clock synchronization between sensors must be considered. Hence, in this paper, passive localization of a stationary emitter is performed using successive TDOA and FDOA measurements from two moving sensors. In this case, since an independent pair of sensors is added to the data set at every measurement instant, the sensor pairs do not share a common reference sensor. Therefore, QCLS (quadratic correction least squares) methods, in which all sensor pairs must include the common reference sensor, cannot be applied. For this reason, a Gauss-Newton algorithm is adopted to solve the nonlinear least squares problem. In addition, to show the performance of the proposed method, we compare the RMSE (root mean square error) of the estimates with the CRLB (Cramer-Rao lower bound) and derive CEP (circular error probable) planes to analyze the expected estimation performance in 2-dimensional space.
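The Gauss-Newton iteration at the core of the abstract above can be sketched generically: linearize the measurement residual with a Jacobian and solve a small linear least-squares problem per step. A minimal sketch on a noiseless 2-D TDOA-only toy problem with a common reference sensor (unlike the successive-pair setup in the paper); sensor and emitter positions are assumed for illustration.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=20, eps=1e-6):
    """Generic Gauss-Newton for nonlinear least squares with a
    finite-difference Jacobian (a sketch, not the paper's exact setup)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for k in range(x.size):
            dx = np.zeros_like(x)
            dx[k] = eps
            J[:, k] = (residual(x + dx) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# toy 2-D geometry: assumed sensor positions and true emitter location
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
emitter = np.array([3.0, 4.0])

def ranges(p):
    return np.linalg.norm(sensors - p, axis=1)

# range differences relative to sensor 0 play the role of c * TDOA
meas = ranges(emitter)[1:] - ranges(emitter)[0]

def residual(p):
    r = ranges(p)
    return (r[1:] - r[0]) - meas

est = gauss_newton(residual, x0=[5.0, 5.0])
print(est)  # converges near the true emitter position [3, 4]
```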

A Fault Tolerant Control Technique for Hybrid Modular Multi-Level Converters with Fault Detection Capability

  • Abdelsalam, Mahmoud;Marei, Mostafa Ibrahim;Diab, Hatem Yassin;Tennakoon, Sarath B.
    • Journal of Power Electronics
    • /
    • v.18 no.2
    • /
    • pp.558-572
    • /
    • 2018
  • In addition to its modular nature, a Hybrid Modular Multilevel Converter (HMMC) assembled from half-bridge and full-bridge sub-modules is able to block DC faults with a minimum number of switching devices, which makes it attractive for high power applications. This paper introduces a control strategy based on the Root-Least Square (RLS) algorithm to estimate the capacitor voltages instead of using direct measurements, eliminating the need for voltage transducers in the HMMC sub-modules and the associated communication link with the central controller. In addition to capacitor voltage balancing and suppression of circulating currents, a fault tolerant control unit (FTCU) is integrated into the proposed strategy to modify the parameters of the HMMC controller. One advantage of the proposed FTCU is that it does not need extra components. Furthermore, a fault detection unit is included, utilizing a hybrid estimation scheme to detect sub-module faults. The behavior of the suggested technique is assessed using PSCAD offline simulations and validated using a real-time digital simulator connected to a real-time controller under various normal and fault conditions. The proposed strategy shows robust performance in terms of accuracy and time response, as it succeeds in stabilizing the HMMC under faults.

Identification of Fuzzy Inference Systems Using a Multi-objective Space Search Algorithm and Information Granulation

  • Huang, Wei;Oh, Sung-Kwun;Ding, Lixin;Kim, Hyun-Ki;Joo, Su-Chong
    • Journal of Electrical Engineering and Technology
    • /
    • v.6 no.6
    • /
    • pp.853-866
    • /
    • 2011
  • We propose a multi-objective space search algorithm (MSSA) and introduce the identification of fuzzy inference systems based on the MSSA and information granulation (IG). The MSSA is a multi-objective optimization algorithm whose search method is associated with the analysis of the solution space; its multi-objective mechanism is realized using a non-dominated sorting-based strategy. In the identification of the fuzzy inference system, the MSSA is exploited to carry out parametric optimization of the fuzzy model and to achieve its structural optimization. The granulation of information is attained using the C-Means clustering algorithm. The overall optimization of fuzzy inference systems comes in the form of two identification mechanisms: structure identification (the number of input variables to be used, a specific subset of input variables, the number of membership functions, and the polynomial type) and parameter identification (viz. the apexes of the membership functions). The structure identification is developed by the MSSA and C-Means, whereas the parameter identification is realized via the MSSA and the least squares method. The performance of the proposed model was evaluated using three representative numerical examples: the gas furnace data, NOx emission process data, and the Mackey-Glass time series. The proposed model was also compared with the quality of some "conventional" fuzzy models encountered in the literature.

Subsidence of Cylindrical Cage ($AMSLU^{TM}$ Cage) : Postoperative 1 Year Follow-up of the Cervical Anterior Interbody Fusion

  • Joung, Young-Il;Oh, Seong-Hoon;Ko, Yong;Yi, Hyeong-Joong;Lee, Seung-Ku
    • Journal of Korean Neurosurgical Society
    • /
    • v.42 no.5
    • /
    • pp.367-370
    • /
    • 2007
  • Objective : There are numerous reports on the primary stabilizing effects of the different cervical cages for cervical radiculopathy, but little is known about subsidence, which may be a clinical problem postoperatively. The goal of this study is to evaluate cage subsidence and investigate the correlation between radiologic subsidence and clinical outcome. Methods : To assess possible subsidence, the authors investigated the clinical and radiological results of one hundred patients who underwent anterior cervical fusion using the $AMSLU^{TM}$ cage between January 2003 and June 2005. Preoperative and postoperative lateral radiographs were measured for the height of the intervertebral disc space where the cages were placed; the disc-space height was measured by dividing the sum of the anterior, posterior, and midpoint interbody distances by 3. Follow-up time was 6 to 12 months. Subsidence was defined as a change of at least 3 mm in at least one of these parameters. Results : Subsidence was found in 22 patients (22%). The mean value of subsidence was 2.21 mm, and the mean subsidence rate was 22%. There were no cases of clinical status deterioration during the follow-up period, and no posterior or anterior migration was observed. Conclusion : The phenomenon of subsidence is seen in a substantial number of patients. Nevertheless, the clinical and radiological results of the surgery were favorable. Excessive subsidence may result in hardware failure. Endplate preservation may enable us to control subsidence and reduce the number of complications.
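The measurement convention above (disc-space height as the mean of three interbody distances, subsidence as a change of at least 3 mm in any single parameter) can be written down directly. A minimal sketch with hypothetical measurement values, not data from the study.

```python
def disc_height(anterior, midpoint, posterior):
    """Disc-space height (mm) as the mean of the anterior, midpoint,
    and posterior interbody distances, as in the study's method."""
    return (anterior + midpoint + posterior) / 3.0

def is_subsidence(pre, post, threshold_mm=3.0):
    """Subsidence: any single parameter decreases by >= 3 mm."""
    return any(a - b >= threshold_mm for a, b in zip(pre, post))

pre = (7.2, 7.0, 6.8)    # hypothetical pre-op anterior/mid/posterior (mm)
post = (6.9, 3.8, 6.5)   # hypothetical post-op values (mm)
print(disc_height(*pre), is_subsidence(pre, post))
```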

An Efficient CPLD Technology Mapping considering Area under Time Constraint (시간 제약 조건하에서 면적을 고려한 효율적인 CPLD 기술 매핑)

  • Kim, Jae-Jin;Kim, Hui-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.38 no.1
    • /
    • pp.79-85
    • /
    • 2001
  • In this paper, we propose a new technology mapping algorithm for CPLDs that considers area under a time constraint (TMFCPLD). The algorithm detects feedbacks in boolean networks and replaces the variables with feedback by temporary variables; creating the temporary variables transforms a sequential circuit into a combinational circuit. The transformed circuit is represented as a DAG. After traversing all nodes in the DAG, nodes with more than two output edges are replicated and reconstructed into fanout-free trees. This reduces area and improves the total run time of circuits compared to the previously proposed TEMPLA. Using the time constraint and the delay of the device, the number of multi-levels into which the graph can be partitioned is decided. The initial cost of each node is the number of OR-terms it has. Among the mappable clusters, the cluster with the fewest multi-levels is selected and the graph is partitioned. Nodes in the partitioned clusters are merged by collapsing and fitted to the number of OR-terms in a given CLB by bin packing. The proposed algorithm has been applied to the MCNC logic synthesis benchmark circuits and reduces the number of CLBs by 62.2% compared to DDMAP, by 17.6% compared to TEMPLA, and by 4.7% compared to TMCPLD. These results should bring much efficiency to technology mapping for CPLDs.

  • PDF
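The final fitting step above, packing collapsed nodes into CLBs by their OR-term counts, is a classic bin-packing problem. A minimal sketch using first-fit decreasing as an assumed packing heuristic (the abstract does not name one); the term counts and the 16-term CLB capacity are illustrative.

```python
def first_fit_decreasing(term_counts, clb_capacity):
    """Pack nodes (by OR-term count) into CLBs of fixed OR-term
    capacity using the first-fit decreasing heuristic."""
    bins = []  # each bin: [remaining_capacity, [packed term counts]]
    for count in sorted(term_counts, reverse=True):
        for b in bins:
            if b[0] >= count:          # fits in an existing CLB
                b[0] -= count
                b[1].append(count)
                break
        else:                          # open a new CLB
            bins.append([clb_capacity - count, [count]])
    return [b[1] for b in bins]

# hypothetical node OR-term counts and a CLB holding 16 OR-terms
clbs = first_fit_decreasing([9, 7, 6, 5, 4, 3, 2], clb_capacity=16)
print(len(clbs), clbs)  # → 3 [[9, 7], [6, 5, 4], [3, 2]]
```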

The Improvement of Convergence Characteristic using the New RLS Algorithm in Recycling Buffer Structures

  • Kim, Gwang-Jun;Kim, Chun-Suck
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.4
    • /
    • pp.691-698
    • /
    • 2003
  • We extend the use of the method of least squares to develop a recursive algorithm for the design of adaptive transversal filters such that, given the least-squares estimate of the filter's tap-weight vector at iteration n-1, we may compute the updated estimate at iteration n upon the arrival of new data. We begin the development of the RLS algorithm by reviewing some basic relations that pertain to the method of least squares; then, by exploiting a relation in matrix algebra known as the matrix inversion lemma, we develop the RLS algorithm. An important feature of the RLS algorithm is that it utilizes the information contained in the input data extending back to the instant of time when the algorithm was initiated. In this paper, we propose a new tap-weight-update RLS algorithm for an adaptive transversal filter with a data-recycling buffer structure. We show that the learning curve of the RLS algorithm with the data-recycling buffer converges faster than that of the existing RLS algorithm in terms of mean square error versus iteration number; the resulting rate of convergence is also typically an order of magnitude faster than that of the simple LMS algorithm. Three-dimensional simulations of the mean square error, as a function of the degree of channel amplitude distortion and the number of data-recycling buffers, show that the number of samples required to converge to a specified value decreases as the buffer number increases. With the proposed RLS algorithm, this improvement in convergence grows with the number of data-recycling buffers.
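The standard RLS recursion reviewed above can be sketched as follows: the matrix inversion lemma updates the inverse correlation matrix P recursively instead of re-solving the least-squares problem at every sample. A minimal sketch of the conventional algorithm (without the paper's data-recycling buffer), applied to a toy system-identification problem with an assumed FIR channel.

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=1e2):
    """Conventional RLS adaptive transversal filter."""
    w = np.zeros(order)
    P = delta * np.eye(order)          # initial inverse correlation matrix
    errors = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # tap-input vector [x_n, ..., x_{n-M+1}]
        k = P @ u / (lam + u @ P @ u)      # gain vector (matrix inversion lemma)
        e = d[n] - w @ u                   # a priori estimation error
        w = w + k * e                      # tap-weight update
        P = (P - np.outer(k, u @ P)) / lam # inverse-correlation update
        errors.append(e)
    return w, np.asarray(errors)

# toy identification problem: recover an unknown FIR channel (assumed taps)
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h, mode="full")[:len(x)]   # noiseless desired signal

w, errors = rls_filter(x, d, order=4)
print(w)  # converges near the true channel taps h
```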

The Effect of Dual-task Training on a Serial Reaction Time Task for Motor Learning

  • Choi, Jin-Ho;Park, So Hyun
    • The Journal of Korean Physical Therapy
    • /
    • v.24 no.6
    • /
    • pp.405-408
    • /
    • 2012
  • Purpose: We examined the effect of dual-task and single-task training on serial reaction time (SRT) task performance to determine whether the SRT task is based more on motor or perceptual processing under dual-task conditions. Methods: Forty healthy adults were divided into two groups: the dual-task group (mean age, $21.8{\pm}1.6$ years) and the single-task group (mean age, $21.7{\pm}1.6$ years). The SRT task consisted of 480 trials in total: four figures were presented randomly 16 times, a unit repeated 10 times was set as one block, and there were 160 trials for each of the three color conditions. The dual-task group performed the SRT task while detecting the color of a specific shape and, at the end of the task, reported the number of that shape; the single-task group performed only the SRT task. The study consisted of three parts: pre-measurement, task performance, and post-measurement. Results: The difference between pre and post reaction times was greater for the dual-task group than for the single-task group, and there was a significant interaction between time and group (p<0.05). Conclusion: Our results indicate that short-term SRT training is not very effective under dual-task conditions; individuals need additional cognitive processes to successfully navigate the task. This suggests that dual-task training may not be appropriate for motor learning enhancement, at least when training occurs over a short period.