• Title/Summary/Keyword: data complexity

Search Results: 2,379

Computational Complexity Analysis of Cascade AOA Estimation Algorithm Based on FMCCA Antenna

  • Kim, Tae-yun;Hwang, Suk-seung
    • Journal of Positioning, Navigation, and Timing / v.11 no.2 / pp.91-98 / 2022
  • In next-generation wireless communication systems, beamforming based on a massive antenna is one of the core technologies for transmitting and receiving huge amounts of data efficiently and accurately. For high-performance, highly reliable beamforming, the Angle of Arrival (AOA) of the desired signal incident on the antenna must be estimated accurately. Although employing a massive antenna with a large number of elements enhances the accuracy of AOA estimation, it increases the computational complexity so dramatically that real-time communication becomes difficult. To address this problem, low-complexity AOA estimation algorithms for massive antennas have been actively studied. In this paper, we compute and analyze the computational complexity of the cascade AOA estimation algorithm based on the Flexible Massive Concentric Circular Array (FMCCA), and compare it to conventional AOA estimation techniques such as the high-resolution Multiple Signal Classification (MUSIC) algorithm and the Only Beamspace MUSIC (OBM) algorithm.
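
As a point of reference for the comparison above, here is a minimal NumPy sketch of the classical MUSIC pseudo-spectrum for a uniform linear array. The array size, element spacing, noise level, and scan grid are assumptions chosen for illustration, not the FMCCA configuration or the cascade estimator analyzed in the paper. The O(M^3) eigendecomposition of the covariance matrix and the dense angle scan are the terms that grow quickly with the element count M.

```python
# Illustrative MUSIC AOA sketch for a uniform linear array (ULA),
# NOT the paper's FMCCA-based cascade estimator; all parameters are assumptions.
import numpy as np

def music_spectrum(X, n_sources, scan_deg):
    """X: (n_elements, n_snapshots) matrix of received snapshots."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = eigvec[:, : M - n_sources]              # noise subspace
    spectrum = []
    for theta in np.deg2rad(scan_deg):
        a = np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))   # steering vector, d = lambda/2
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))  # pseudo-spectrum
    return np.array(spectrum)

# Hypothetical example: one source at 20 degrees, 8-element ULA, 200 snapshots.
rng = np.random.default_rng(0)
M, N, true_deg = 8, 200, 20.0
a_true = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(true_deg)))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
X = np.outer(a_true, s) + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
scan = np.arange(-90, 90.5, 0.5)
print("estimated AOA:", scan[np.argmax(music_spectrum(X, 1, scan))], "deg")  # close to 20
```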

A Study on the Propriety of the Medical Insurance Fee Schedule of Surgical Operations - In Regard to the Relative Price System and the Classification of the Price Unit of Insurance Fee Schedule - (수술수가의 적정성에 관한 연구 - 상대가격체계와 항목분류를 중심으로 -)

  • Oh Jin Joo
    • Journal of Korean Public Health Nursing / v.2 no.2 / pp.21-44 / 1988
  • In Korea, fee-for-service reimbursement has been used since the beginning of the medical insurance system in 1977, and the importance of the relative value unit is currently being investigated. The purpose of this study was to assess the propriety of the differences in fees for different surgical services and the appropriateness of the classification of the insurance fee schedule. The specific subjects and procedural methodology were as follows. 1. The propriety of the Relative Price System (RPS). 1) Choice of sample operations: sample operations were selected and classified by specialists in general surgery, who identified 32 items; for the same group of operations the Insurance Fee Schedule (IFS) lists 24 separate items, and these 24 items were examined to investigate the propriety of the RPS. 2) Evaluation of the complexity of surgery: data were collected from 94 specialists in general surgery by mail survey from November 1 to 15, 1986; several independent variables (age, location, number of beds, university hospital status, and whether the institution accepts residents) were also investigated to analyze the characteristics of surgical complexity. 3) Complexity and time calculations: time data were collected from the records of Seoul National University Hospital, and the cost per operation was calculated through cost-finding methods. 4) Analysis of the propriety of the RPS of the Insurance Fee Schedule: the RPS of the sample operations was regressed on the cost, time, and complexity relative value systems (RVS) separately; the coefficient of determination indicates the degree of variation in the RPS of the Insurance Fee Schedule explained by each RVS. 2. The appropriateness of the classification of the Insurance Fee Schedule. 1) Choice of sample operations: items that differed between the specialists' classification and the classification of the medical Insurance Fee Schedule were chosen. 2) Cost, time, and complexity were compared between the items to evaluate which classification was more appropriate. The findings can be summarized as follows. 1. The coefficient of determination of the regression of the RPS on the cost RVS was 0.58, on the time RVS 0.65, and on the complexity RVS 0.72. This means that the RPS of the Insurance Fee Schedule is improper with respect to cost, time, and complexity, and indicates that the RPS must be reshaped according to a standard element. The correlation coefficients among the cost, time, and complexity relative value systems were very high, suggesting that the RPS could be reshaped according to any one standard element; considering ease of measurement, time appears most appropriate. 2. The classifications of the specialists and of the Insurance Fee Schedule were compared with respect to cost, time, and complexity separately. For complexity an ANOVA was performed, and for the others the values of the different classifications were compared. The result was that the specialists' classification was more reasonable and that the classification of the Insurance Fee Schedule inappropriately grouped several operations into one price unit.
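
For readers unfamiliar with the coefficient of determination used above, here is a small sketch of regressing a fee schedule's relative prices (RPS) on a candidate relative value scale (RVS) and reporting R². The numbers are invented for illustration and are not the study's data.

```python
# Hypothetical RPS-vs-RVS regression; data values are made up for illustration.
import numpy as np

def r_squared(rps, rvs):
    slope, intercept = np.polyfit(rvs, rps, 1)        # simple OLS fit
    fitted = slope * rvs + intercept
    ss_res = np.sum((rps - fitted) ** 2)
    ss_tot = np.sum((rps - np.mean(rps)) ** 2)
    return 1.0 - ss_res / ss_tot

rps = np.array([1.0, 1.8, 2.1, 3.5, 4.0, 6.2])        # hypothetical fee-schedule relative prices
time_rvs = np.array([0.9, 1.5, 2.4, 3.1, 4.3, 5.9])   # hypothetical time-based relative values
print(f"R^2 of RPS on time RVS: {r_squared(rps, time_rvs):.2f}")
```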

SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • Journal of Korea Multimedia Society / v.9 no.12 / pp.1607-1616 / 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions in SoCs. However, the common models for verification are not suitable for SoCs. A novel model, WGL (Weighted Generalized List), is proposed, based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model to linearly represent the common word-level functions and that the bit-level WGL is especially suitable for arithmetic-intensive circuits. The model is proved to be a uniform and efficient model for both bit-level and word-level functions. Then, based on the WGL model, a backward-construction logic-verification approach is presented, which reduces the time and space complexity for multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented in order to implement complex high-level verification; it combines order computation and coefficient solving, and adopts an efficient backward approach. The construction complexity is much lower than that of existing methods, e.g., the construction time for multipliers grows with a power of less than 1.6 in the size of the input word, without increasing the maximal space required. The WGL model and the verification methods based on it show their theoretical and practical significance in SoC design.
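
The WGL construction itself is not reproduced here. As a contrast to the polynomial-complexity approach described above, the sketch below verifies a tiny 2-bit gate-level multiplier against its word-level specification x*y by exhaustive simulation; this brute-force check grows exponentially with the word size, which is exactly the cost that word-level models such as WGL are designed to avoid. The gate netlist is an assumption for illustration.

```python
# Naive exhaustive equivalence check of a hypothetical 2-bit gate-level
# multiplier against the word-level spec x * y (NOT the WGL method).
from itertools import product

def multiplier_2bit(x0, x1, y0, y1):
    """Gate-level 2x2 multiplier; returns 4 output bits (LSB first)."""
    p0 = x0 & y0
    a = x1 & y0
    b = x0 & y1
    p1 = a ^ b
    c1 = a & b
    d = x1 & y1
    p2 = d ^ c1
    p3 = d & c1
    return p0, p1, p2, p3

ok = True
for x0, x1, y0, y1 in product((0, 1), repeat=4):
    bits = multiplier_2bit(x0, x1, y0, y1)
    word = sum(bit << i for i, bit in enumerate(bits))   # word-level value of the outputs
    x, y = x0 + 2 * x1, y0 + 2 * y1
    ok &= (word == x * y)
print("2-bit multiplier matches word-level spec x*y:", ok)
```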

Traditional Korean Medicine Research Using Methods in Complexity Science: Current Status and Prospect (복잡계 과학 방법론을 활용한 한의학 연구: 현황과 전망)

  • Jang, Dongyeop;Cho, Na-Hyun;Lee, Ki-Eun;Kwon, Young-Kyu;Kim, Chang-Eop
    • Journal of Physiology & Pathology in Korean Medicine / v.35 no.5 / pp.151-161 / 2021
  • Traditional Korean medicine (TKM) takes a holistic view that emphasizes the balance between the elements constituting the human body, or between the human body and the external environment. To investigate the holistic properties of TKM, we propose applying the methodology of complexity science to TKM research. Complexity science is a discipline that studies complex systems whose interacting components give rise to behaviour of the whole that can be more than the sum of its parts. We first provide an introduction to complexity science and its research methods, focusing in particular on network science and data science approaches. Next, we briefly present the current status of TKM research employing these methods. Finally, we provide suggestions for future research elucidating the underlying mechanisms of TKM, in terms of both biomedicine and the humanities.

Moderating Effect of Structural Complexity on the Relationship between Surgery Volume and in Hospital Mortality of Cancer Patients (일부 암 종의 수술량과 병원 내 사망률의 관계에서 구조적 복잡성의 조절효과)

  • Youn, Kyungil
    • Health Policy and Management / v.24 no.4 / pp.380-388 / 2014
  • Background: The volume of surgery has been examined as a major source of variation in outcomes after surgery. This study investigated the direct effect of surgery volume on in-hospital mortality and the moderating effect of structural complexity, the level of diversity and sophistication of the technology a hospital applies in patient care, on the volume-outcome relationship. Methods: Discharge summary data of 11,827 cancer patients who underwent surgery and were discharged during a one-month period in 2010 and 2011 were analyzed. The analytic model included independent variables such as the surgery volume of a hospital, structural complexity measured by the number of diagnoses a hospital examined, and their interaction term. A hierarchical logistic regression model was used to test for an association between hospital complexity and mortality rates and for a moderating effect in the volume-outcome relationship. Results: As structural complexity increased, the probability of in-hospital mortality after cancer surgery decreased. The interaction term between surgery volume and structural complexity was also statistically significant, and the interaction effect was strongest among patients who had surgery in low-volume hospitals. Conclusion: Structural complexity and surgery volume should be considered simultaneously when studying the volume-outcome relationship and when developing policies that aim to reduce mortality after cancer surgery.
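
As an illustration of testing a moderating effect of the kind reported above, the sketch below fits a logistic regression with a volume × complexity interaction term on simulated data. The data, coefficients, and the use of statsmodels are assumptions for illustration; the study itself applies a hierarchical logistic regression to discharge summaries.

```python
# Moderating-effect (interaction) sketch on simulated data; not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
volume = rng.gamma(shape=2.0, scale=50.0, size=n)       # hypothetical hospital surgery volume
complexity = rng.normal(loc=0.0, scale=1.0, size=n)     # hypothetical standardized structural complexity
# Made-up true model: volume and complexity lower mortality; the interaction
# weakens the volume effect in structurally complex hospitals.
logit_p = -3.0 - 0.004 * volume - 0.3 * complexity + 0.002 * volume * complexity
died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
df = pd.DataFrame({"died": died, "volume": volume, "complexity": complexity})

fit = smf.logit("died ~ volume * complexity", data=df).fit(disp=False)
print(fit.summary().tables[1])   # the volume:complexity row is the moderating effect
```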

N-Step Sliding Recursion Formula of Variance and Its Implementation

  • Yu, Lang;He, Gang;Mutahir, Ahmad Khwaja
    • Journal of Information Processing Systems / v.16 no.4 / pp.832-844 / 2020
  • The degree of dispersion of a random variable can be described by the variance, which reflects the distance of the random variable from its mean. However, the time complexity of the traditional variance calculation algorithm is O(n), since all samples must be processed in full. When the number of samples increases, or in high-speed signal processing, an O(n) algorithm costs a huge amount of time and may degrade the performance of the whole system. A novel multi-step recursive algorithm for variance calculation of time-varying data series with O(1) (constant) time complexity is proposed in this paper. Numerical simulations and experiments are presented, and the results demonstrate that the proposed multi-step recursive algorithm can effectively decrease computing time and hence significantly improve variance calculation efficiency for time-varying data, which demonstrates its potential value for time-consuming data analysis and high-speed signal processing.
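
To illustrate the idea of replacing full recomputation with a recursion, the sketch below maintains running sums so that the variance of a sliding window is updated in O(1) per sample. This is the standard single-step sliding update under an assumed window size, not necessarily the paper's exact N-step formula.

```python
# O(1)-per-sample sliding-window variance via running sums (illustrative sketch).
from collections import deque

class SlidingVariance:
    def __init__(self, window):
        self.window = window
        self.buf = deque()
        self.s1 = 0.0   # running sum of samples
        self.s2 = 0.0   # running sum of squared samples

    def update(self, x):
        """Push a new sample and return the window's population variance."""
        self.buf.append(x)
        self.s1 += x
        self.s2 += x * x
        if len(self.buf) > self.window:
            old = self.buf.popleft()     # drop the sample leaving the window
            self.s1 -= old
            self.s2 -= old * old
        n = len(self.buf)
        mean = self.s1 / n
        return self.s2 / n - mean * mean

sv = SlidingVariance(window=4)
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    print(f"{x:4.1f} -> var {sv.update(x):.3f}")
```

For very long streams the sum-of-squares form can suffer from catastrophic cancellation; a Welford-style recursion is numerically safer when precision matters.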

Frequency-Domain RLS Algorithm Based on the Block Processing Technique (블록 프로세싱 기법을 이용한 주파수 영역에서의 회귀 최소 자승 알고리듬)

  • 박부견;김동규;박원석
    • 제어로봇시스템학회:학술대회논문집 / 2000.10a / pp.240-240 / 2000
  • This paper presents two algorithms based on the concept of the frequency-domain adaptive filter (FDAF). First, the frequency-domain recursive least squares (FRLS) algorithm with the overlap-save filtering technique is introduced; it minimizes the sum of exponentially weighted squared errors in the frequency domain. To eliminate discrepancies between the linear convolution and the circular convolution, the overlap-save method is utilized. Second, a sliding method for data blocks is studied to overcome the processing delays and complexity loads of the FRLS algorithm. The size of the extended data block is twice the filter tap length. The data block can be slid in various ways by an adjustable hopping index; by selecting the hopping index appropriately, we can trade off the convergence rate against the computational complexity. When the input signal is highly correlated and the target FIR filter is long, the FRLS algorithm based on the block processing technique performs well in terms of both convergence rate and computational complexity.
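
The overlap-save step mentioned above can be sketched on its own: filter the signal in FFT blocks twice the tap length and discard the first half of each block's output, which is corrupted by circular wrap-around. The adaptive (RLS) weight update is omitted; the block and filter lengths below are illustrative assumptions.

```python
# Overlap-save block filtering with the FFT (fixed filter, no adaptation).
import numpy as np

def overlap_save(x, h):
    M = len(h)                          # filter tap length
    L = 2 * M                           # FFT block length, twice the tap length
    H = np.fft.rfft(h, L)
    x_padded = np.concatenate([np.zeros(M), x])
    out = []
    for start in range(0, len(x), M):
        block = x_padded[start:start + L]
        if len(block) < L:
            block = np.pad(block, (0, L - len(block)))
        y_block = np.fft.irfft(np.fft.rfft(block, L) * H, L)
        out.append(y_block[M:])         # discard the circularly wrapped first half
    return np.concatenate(out)[: len(x)]

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = np.array([0.5, 0.3, 0.1, 0.05])
print(np.allclose(overlap_save(x, h), np.convolve(x, h)[: len(x)]))   # True
```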

A new fractal image decoding algorithm with fast convergence speed (고속 수렴 속도를 갖는 새로운 프랙탈 영상 복호화 알고리듬)

  • 유권열;문광석
    • Journal of the Korean Institute of Telematics and Electronics S / v.34S no.8 / pp.74-83 / 1997
  • In this paper, we propose a new fractal image decoding algorithm with fast convergence speed, based on data dependence and improved initial image estimation. Conventional fractal image decoding requires high computational complexity because the iterated contractive transformations are applied to all range blocks. In the proposed method, the range of the reconstructed image is divided into a referenced range and a data dependence region, and the computational complexity is reduced by applying the iterated contractive transformations to the referenced range only. The data dependence region can then be decoded with a single pass of transformations once the referenced range has converged. In addition, a more exact initial image is estimated using the bound() function, and an initial image closer to the fixed point is estimated using range block division estimation. Consequently, the convergence speed of the reconstructed image is improved, with a 40% reduction in computational complexity.
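
For context, the sketch below shows conventional fractal decoding in one dimension: each range block is rebuilt from a scaled, offset, spatially contracted copy of a domain block of the current image, and iteration converges to the fixed point from any initial image. The encoded transform parameters are invented, and the paper's speed-ups (data-dependence regions and initial image estimation) are not implemented.

```python
# Conventional 1-D fractal decoding by iterated contractive transformations
# (illustrative sketch with made-up transform parameters).
import numpy as np

RANGE = 4                       # range block length
DOMAIN = 8                      # domain block length (contracted 2:1 to RANGE)
# One transform per range block: (domain_start, scale, offset); |scale| < 1 keeps it contractive.
transforms = [(8, 0.5, 10.0), (0, 0.4, 3.0), (4, 0.6, -2.0), (2, 0.3, 20.0)]

def decode(transforms, length, iterations=20):
    img = np.zeros(length)                               # arbitrary initial image
    for _ in range(iterations):
        new = np.empty_like(img)
        for i, (d0, scale, offset) in enumerate(transforms):
            domain = img[d0:d0 + DOMAIN]
            shrunk = domain.reshape(RANGE, 2).mean(axis=1)   # 2:1 spatial contraction
            new[i * RANGE:(i + 1) * RANGE] = scale * shrunk + offset
        img = new
    return img

print(np.round(decode(transforms, length=16), 2))        # converged fixed-point image
```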

Developing Stock Pattern Searching System using Sequence Alignment Algorithm (서열 정렬 알고리즘을 이용한 주가 패턴 탐색 시스템 개발)

  • Kim, Hyong-Jun;Cho, Hwan-Gue
    • Journal of KIISE:Computer Systems and Theory / v.37 no.6 / pp.354-367 / 2010
  • There are many methods for analyzing patterns in time series data. Although stock data form a time series, there are few studies on stock pattern analysis and prediction, since stock prices are widely believed to change randomly and therefore to be unpredictable by scientific methods. In this paper, we measure the degree of randomness of stock prices using Kolmogorov complexity, and we show that there is a strong correlation between this degree and the accuracy of stock price prediction using our semi-global alignment method. We transform the stock price data into quantized string sequences and then measure the randomness of stock prices as the Kolmogorov complexity of those sequences. We used data on 690 KOSPI stocks spanning 28 years for our experiments and to evaluate our methodology. When the Kolmogorov complexity is high, the stock price cannot be predicted; when it is low, it can be. For the stock price changes of interest to investors, the prediction ratio is 12% for short-term predictions and 54% for long-term predictions.
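
Kolmogorov complexity is uncomputable, so in practice it is approximated. The sketch below illustrates one common stand-in under assumed parameters: quantize daily returns into a three-symbol string and use a zlib compression ratio as a rough randomness score. The thresholds and the compression-based estimate are assumptions for illustration, not the paper's exact measure or its semi-global alignment search.

```python
# Quantize price changes to a symbol string and score randomness by compressibility.
import zlib
import numpy as np

def quantize(prices):
    """Map daily returns to symbols: d = down, f = flat, u = up (thresholds assumed)."""
    returns = np.diff(prices) / prices[:-1]
    return "".join("d" if r < -0.005 else "u" if r > 0.005 else "f" for r in returns)

def complexity_estimate(symbols):
    """Compressed size / original size; a higher ratio means closer to random."""
    raw = symbols.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(7)
random_walk = 100 * np.cumprod(1 + rng.normal(0, 0.01, 1000))          # noisy series
trending = 100 * np.cumprod(1 + np.abs(rng.normal(0.002, 0.001, 1000)))  # smooth uptrend
print("random walk:", round(complexity_estimate(quantize(random_walk)), 3))
print("trending   :", round(complexity_estimate(quantize(trending)), 3))
```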

Design of Low Complexity Human Anxiety Classification Model based on Machine Learning (기계학습 기반 저 복잡도 긴장 상태 분류 모델)

  • Hong, Eunjae;Park, Hyunggon
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.9 / pp.1402-1408 / 2017
  • Recently, services for personal biometric data analysis based on real-time monitoring systems have been increasing, and many of them focus on emotion recognition. In this paper, we propose a model for classifying the anxiety emotion using biometric data actually collected from people. We deploy a support vector machine to build the classification model, and, to improve classification accuracy, we propose two data pre-processing procedures: normalization and data deletion. The proposed algorithms are implemented in a Real-time Traffic Flow Measurement structure, which consists of a data collection module, a data pre-processing module, and a classification model creation module. Our experimental results show that the proposed classification model can infer people's anxiety with an accuracy of 65.18%. Moreover, with the proposed pre-processing techniques, the accuracy improves to 78.77%. Therefore, we conclude that the proposed classification model with the pre-processing procedures can improve classification accuracy at low computational complexity.
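
As a sketch of the pipeline described above (normalization followed by an SVM classifier), the code below uses scikit-learn on synthetic two-feature biometric data. The features, labels, and kernel settings are placeholders; the paper's data-deletion pre-processing step and its Real-time Traffic Flow Measurement structure are not reproduced.

```python
# Normalization + SVM classification sketch on synthetic biometric features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
heart_rate = np.concatenate([rng.normal(70, 8, n // 2), rng.normal(90, 10, n // 2)])   # hypothetical
skin_cond = np.concatenate([rng.normal(2.0, 0.5, n // 2), rng.normal(3.0, 0.7, n // 2)])
X = np.column_stack([heart_rate, skin_cond])
y = np.array([0] * (n // 2) + [1] * (n // 2))         # 0 = calm, 1 = anxious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))   # normalize, then SVM
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2%}")
```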