• Title/Abstract/Keywords: data complexity

Search results: 2,379 items (processing time: 0.032 s)

A High-Speed Low-Complexity 128/64-point Radix-2^4 FFT Processor for MIMO-OFDM Systems

  • 리우 항;이한호
    • 대한전자공학회논문지SD, Vol. 46, No. 2, pp. 15-23, 2009
  • This paper proposes a high-speed, low-hardware-complexity 128/64-point radix-2^4 FFT/IFFT processor for MIMO-OFDM systems that require high data throughput. The high-radix multipath delay feedback (MDF) FFT architecture provides high throughput with low hardware complexity. The proposed processor not only supports both 128-point and 64-point FFT/IFFT operation but also achieves high throughput by using four parallel data paths. Moreover, it has lower hardware complexity than existing 128/64-point FFT/IFFT processors. The proposed FFT/IFFT processor satisfies the requirements of the IEEE 802.11n standard and achieves a high throughput of 560 MSample/s at a 140 MHz clock.
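As a functional point of reference for the transform such processors compute, a plain radix-2 decimation-in-time FFT can be sketched in a few lines of Python. This is the textbook algorithm, not the paper's radix-2^4 MDF pipeline:

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor W_N^k
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

A radix-2^4 design reorganizes these butterflies so that most twiddle multiplications become trivial, which is where the hardware savings come from.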

A Fast and Secure Scheme for Data Outsourcing in the Cloud

  • Liu, Yanjun;Wu, Hsiao-Ling;Chang, Chin-Chen
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 8, No. 8, pp. 2708-2721, 2014
  • Data outsourcing in the cloud (DOC) is a promising solution for data management at present, but it can result in the disclosure of outsourced data to unauthorized users. Therefore, protecting the confidentiality of such data has become a very challenging issue. The conventional way to achieve data confidentiality is to encrypt the data via asymmetric or symmetric encryption before outsourcing. However, this is computationally inefficient because encryption/decryption operations are time-consuming. In recent years, a few DOC schemes based on secret sharing have emerged due to their low computational complexity. However, Dautrich and Ravishankar pointed out that most of them are insecure against certain kinds of collusion attacks. In this paper, we propose a novel DOC scheme based on Shamir's secret sharing to overcome the security issues of these schemes. Our scheme allows an authorized data user to recover all data files in a specified subset at once, rather than one file at a time as required by other secret-sharing-based schemes. Our thorough analyses show that the proposed scheme is secure and that its performance is satisfactory.
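The Shamir secret sharing underlying the scheme can be illustrated with a minimal sketch; the field prime and parameter names here are illustrative choices, not the paper's configuration:

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative)

def share(secret, k, n):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Division in the field = multiplication by the modular inverse.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any k shares suffice, and fewer than k reveal nothing about the secret, which is what makes secret-sharing-based outsourcing cheap compared with full encryption.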

Complexity Pattern of the Center of Pressure between Genders with Increasing Running Speed

  • Ryu, Jiseon
    • 한국운동역학회지, Vol. 29, No. 4, pp. 247-254, 2019
  • Objective: The goal of this study was to determine the center of pressure (CoP) complexity pattern, using the approximate entropy (ApEn) technique, between genders at different running speeds. Background: The study evaluates the complexity pattern of the CoP as running speed increases, to gain insight into injury prediction, stability, and auxiliary aids for the foot. Method: Twenty men (age=22.3±1.5 yrs.; height=176.4±5.4 cm; body weight=73.9±8.2 kg) and twenty women (age=20.8±1.2 yrs.; height=162.8±5.2 cm; body weight=55.0±6.3 kg) with a heel-strike pattern were recruited. While they ran at 2.22, 3.33, and 4.44 m/s on a treadmill (instrumented dual-belt treadmill, USA) with a force plate, CoP data were collected for 10 strides. The complexity pattern of the CoP was analyzed using the ApEn technique. Results: The ApEn of the medial-lateral and antero-posterior CoP showed significant differences within genders as running speed increased (p<.05), but no statistically significant differences between genders at any running speed. Conclusion: Based on these results, the CoP complexity pattern with increasing running speed could not be used to distinguish genders as an indicator of potential injury and stability. Application: Future studies should investigate the causes of changes in CoP complexity at various running speeds.
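The ApEn technique used for the CoP analysis can be sketched as a direct implementation of Pincus's definition. Note that the tolerance r is treated as an absolute value here, whereas gait studies typically set it to 0.2 times the series standard deviation:

```python
import math

def apen(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus's formulation)."""
    n = len(u)
    def phi(mm):
        count = n - mm + 1
        total = 0.0
        for i in range(count):
            matches = 0
            for j in range(count):
                # Chebyshev distance between the two length-mm templates
                if max(abs(u[i + k] - u[j + k]) for k in range(mm)) <= r:
                    matches += 1
            total += math.log(matches / count)
        return total / count
    return phi(m) - phi(m + 1)
```

Higher ApEn indicates a less regular (more complex) CoP trajectory; a perfectly repetitive signal yields a value near zero.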

The Association Between Childhood Interpersonal Trauma and Psychiatric Symptom Complexity, and the Mediating Impact of Dissociation

  • 김예슬;김석현;김대호;김은경;김지영;최나연
    • 대한불안의학회지, Vol. 18, No. 2, pp. 72-79, 2022
  • Objective: Any traumatic event can be a risk factor for subsequent mental disorder. However, childhood trauma, especially of an interpersonal nature, is associated with the later development of complex symptom patterns. This study examined the role of dissociation as a mediator between childhood trauma and symptom complexity. Methods: Pooled data from 369 psychiatric outpatients at a university-affiliated hospital were analyzed with descriptive statistics, group comparisons, and bivariate correlation analysis to verify a structural model. The questionnaires included the Symptom Checklist-90-Revised, the Trauma History Screen, the Dissociative Experiences Scale-Taxon, the Beck Depression Inventory, the Beck Anxiety Inventory, and the Abbreviated PTSD Checklist. Results: When other trauma variables were controlled, childhood interpersonal trauma correlated significantly with symptom complexity (r=0.155, p=0.003). Among the paths analyzed, that from childhood interpersonal trauma through dissociation showed the greatest impact on symptom complexity (b=9.34, t=5.75, p<0.001). Based on the significance of the indirect effect, the results suggest a complete mediation effect of dissociation on symptom complexity. Conclusion: This study confirmed that childhood interpersonal trauma affects symptom complexity through the sequential mediating effect of dissociation. Thus, clinicians should understand childhood interpersonal trauma, dissociation, and symptom patterns as complex and interacting, and develop effective treatment strategies accordingly.

A Study on Imputation Methods for Missing Photovoltaic Power Data for Time-Series Models

  • 정하영;홍석훈;전재성;임수창;김종찬;박철영
    • 한국멀티미디어학회논문지, Vol. 24, No. 9, pp. 1251-1260, 2021
  • This paper discusses missing-data processing using a simple moving average (SMA) and a Kalman filter, and compares the SMA and Kalman predictions. Time series analysis is a common method for dealing with time series data in the photovoltaic field. A photovoltaic system records data irregularly, whenever the power value changes, and irregularly recorded data must be transformed into a consistent format to obtain accurate results; resampling to uniform intervals produces missing values. For this reason, the missing values were imputed using the SMA and the Kalman filter. The Kalman filter tracks the observed data better than the SMA: the SMA produces a stepped line graph, while the Kalman filter produces a smooth one. The MAPE of the SMA prediction is 0.00737%, and that of the Kalman prediction is 0.00078%. However, the time complexity of the SMA is O(N), whereas that of the Kalman filter is O(D²) for a D-dimensional state. Accordingly, we suggest choosing the method that best fits the available computational power.
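The two imputation approaches compared above can be sketched as a minimal functional model: a windowed SMA fill and a 1-D random-walk Kalman filter. The noise parameters q and r are assumed values for illustration, not the paper's tuned configuration:

```python
def sma_impute(series, window=3):
    """Fill None gaps with the simple moving average of recent values."""
    filled, recent = [], []
    for v in series:
        if v is None:
            v = sum(recent) / len(recent)   # average of the last `window` values
        filled.append(v)
        recent.append(v)
        if len(recent) > window:
            recent.pop(0)
    return filled

def kalman_impute(series, q=1e-3, r=0.1):
    """1-D random-walk Kalman filter; gaps are filled with the predicted state."""
    x, p = series[0], 1.0       # assumes the first value is observed
    filled = [x]
    for z in series[1:]:
        p += q                  # predict: state unchanged, uncertainty grows
        if z is None:
            filled.append(x)    # no measurement: keep the prediction
            continue
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the measurement
        p *= (1 - k)
        filled.append(x)
    return filled
```

The SMA output holds the last average flat across a gap (the "stepped" graph), while the Kalman estimate blends prediction and measurement into a smoother curve, at the cost of more per-step arithmetic.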

Trends in Hardware Acceleration Techniques for Fully Homomorphic Encryption Operations

  • 박성천;김현우;오유리;나중찬
    • 전자통신동향분석, Vol. 36, No. 6, pp. 1-12, 2021
  • As the demand for big data and big-data-based artificial intelligence (AI) technology increases, so does the need to preserve the privacy of sensitive information contained in big data and to build high-speed, encryption-based AI computation systems. Fully homomorphic encryption (FHE) is a representative encryption technology that preserves the privacy of sensitive data. FHE is being actively investigated primarily because, with FHE, decryption is not required anywhere in the data flow: data can be stored, transmitted, combined, and processed in an encrypted state. Moreover, FHE is based on a hard lattice problem that, because of its high computational complexity, cannot be broken even by a quantum computer. FHE therefore offers a high security level and is receiving considerable attention as a next-generation encryption technology. However, despite enabling computation on encrypted data, the slow computation speed resulting from FHE's high computational complexity is an obstacle to practical use. To address this problem, hardware technology that accelerates FHE operations is receiving extensive research attention. This article examines research trends in hardware technology for accelerating the operations of representative FHE schemes and describes the detailed structures of such accelerator hardware.
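FHE schemes themselves are lattice-based and far too involved for a short sketch, but the core idea of computing on ciphertexts can be illustrated with a toy Paillier cryptosystem, which is only additively homomorphic (a partial scheme, not FHE) and uses deliberately tiny, insecure primes:

```python
import math
import random

def paillier_keypair():
    """Toy Paillier setup: enc/dec closures plus the ciphertext modulus n^2.
    Multiplying two ciphertexts decrypts to the SUM of their plaintexts."""
    p, q = 293, 433                                    # tiny primes, insecure
    n = p * q
    n2 = n * n
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1

    def enc(m):
        while True:                                    # random r coprime to n
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:
                break
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)                # decryption constant

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    return enc, dec, n2
```

The point of the demo is that `enc(a) * enc(b) mod n²` decrypts to `a + b` without ever exposing `a` or `b`; full FHE extends this to arbitrary additions and multiplications, which is what makes it so computationally heavy.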

Analysis of Erlang Capacity for Multi-FA CDMA Systems Supporting Voice and Data Services

  • 구인수;양정록;김태엽;김기선
    • 대한전자공학회 2000년도 하계종합학술대회 논문집(1), pp. 37-40, 2000
  • As the number of CDMA subscribers increases, CDMA systems use more than one CDMA carrier in order to accommodate the growing capacity requirement. In this paper, we present a new analytical method for evaluating the Erlang capacity of CDMA systems with multiple CDMA carriers. With the algorithm proposed in [5], the calculation complexity of evaluating the call-blocking probability grows with the sixth power of the number of CDMA carriers when the system supports voice and data services. Consequently, it is impractical to calculate Erlang capacity with the algorithm of [5], especially when the number of carriers is larger than 3. To resolve this problem, we propose a new analytical method for evaluating the Erlang capacity whose calculation complexity grows only with the second power of the number of CDMA carriers when the system supports voice and data services.
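For background, the classical single-carrier Erlang B recursion for call-blocking probability, the building block that multi-carrier capacity analyses generalize, can be sketched as follows (the paper's own multi-service method is more involved than this):

```python
def erlang_b(traffic, servers):
    """Blocking probability for `traffic` Erlangs offered to `servers` channels,
    via the numerically stable recursion B(0)=1, B(k)=a*B(k-1)/(k + a*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = traffic * b / (k + traffic * b)
    return b
```

Erlang capacity is then the offered load at which this blocking probability reaches the target grade of service.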


A Technique to Apply Inlining for Code Obfuscation Based on a Genetic Algorithm

  • 김정일;이은주
    • 한국IT서비스학회지, Vol. 10, No. 3, pp. 167-177, 2011
  • Code obfuscation is a technique that protects the abstract information contained in a program from malicious reverse engineering, and various obfuscation methods have been proposed for this purpose. Because a program's control flow is essential to clearly understanding the whole program, many control-flow obfuscation transformations have been introduced. Inlining is ordinarily a compiler optimization that improves program performance by reducing call-invocation overhead; in code obfuscation, it is instead used to hide the control-flow structure. In this paper, we define a new control-flow complexity metric based on entropy theory and the N-Scope metric, and then apply a genetic algorithm to obtain optimal inlining results with respect to the defined metric.
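The idea of searching inlining decisions with a genetic algorithm can be sketched as below. The fitness function here is a simple size-cost versus complexity-gain placeholder, not the paper's entropy-based N-Scope metric:

```python
import random

def ga_select_inlines(costs, benefits, budget, pop=30, gens=60):
    """Tiny GA choosing which call sites to inline: one bit per call site.
    Fitness rewards total benefit (e.g., added control-flow complexity for an
    attacker) and heavily penalizes exceeding a code-size budget."""
    n = len(costs)
    def fitness(ch):
        cost = sum(c for c, bit in zip(costs, ch) if bit)
        gain = sum(b for b, bit in zip(benefits, ch) if bit)
        return gain - max(0, cost - budget) * 10
    popn = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[:pop // 2]              # elitist selection
        children = []
        while len(children) < pop - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n)              # occasional point mutation
            child[i] ^= random.random() < 0.1
            children.append(child)
        popn = survivors + children
    return max(popn, key=fitness)
```

For obfuscation the real objective is inverted relative to optimization: the search maximizes the resulting control-flow complexity metric subject to acceptable code growth.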

Distributed Estimation Using Non-regular Quantized Data

  • Kim, Yoon Hak
    • Journal of information and communication convergence engineering, Vol. 15, No. 1, pp. 7-13, 2017
  • We consider distributed estimation in which many nodes, remotely placed at known locations, collect measurements of the parameter of interest, quantize these measurements, and transmit the quantized data to a fusion node, which performs the parameter estimation. Noting that quantizers at the nodes should operate in a non-regular framework, where multiple codewords or quantization partitions can be mapped from a single measurement to improve system performance, we propose a lightweight estimation algorithm that finds the most feasible combination of codewords. This combination is found by computing a weighted sum over the possible combinations, with weights obtained by counting each combination's occurrences in a learning process; without this, tremendous complexity would be inevitable because of the multiple codewords or partitions that can be interpreted from non-regular quantized data. Extensive experiments demonstrate that the proposed algorithm provides a statistically significant performance gain at low complexity compared with typical estimation techniques.
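The fusion step described above, a weighted sum over feasible codeword combinations with occurrence-count weights, can be sketched as follows. The candidate sets and the weight function are illustrative stand-ins for the learned quantities, not the paper's algorithm:

```python
from itertools import product

def fuse_estimate(candidates, weight):
    """Fusion-node estimate from non-regular quantized data.

    `candidates[i]` is the set of codeword values node i's index could map to
    (non-regular: one measurement may match several partitions).
    `weight(combo)` is the occurrence count of a combination from training.
    Returns the weighted mean of the per-combination averages."""
    num = den = 0.0
    for combo in product(*candidates):
        w = weight(combo)
        if w == 0:
            continue                      # combination never seen in training
        num += w * (sum(combo) / len(combo))
        den += w
    if den == 0:
        # No trained combination matched; fall back to uniform weighting.
        combos = list(product(*candidates))
        return sum(sum(c) / len(c) for c in combos) / len(combos)
    return num / den
```

Weighting by observed frequency lets the fusion node discount implausible codeword combinations without enumerating them at full cost during estimation.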

An Efficient VLSI Architecture for High-Speed Matrix Transposition

  • 김견수;장순화;김재호;손경식
    • 한국통신학회논문지, Vol. 21, No. 12, pp. 3256-3264, 1996
  • This paper presents an efficient VLSI architecture for transposing a matrix at high speed. To transpose an N×N matrix, N² transposition cells are arranged in a regular, square structure, with a pipeline structure so that the cells operate in parallel. Each transposition cell consists of a register and an input-data selector. The characteristic of this architecture is that the data to be transposed are divided into several bundles of bits and then processed serially. Using this serial transposition of the divided input data, the hardware complexity of each transposition cell is reduced, and the routing between adjacent cells is kept simple. The proposed architecture was designed and implemented with a 0.5-μm VLSI library. As a result, it operates stably at 200 MHz with lower hardware complexity than conventional architectures.
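The divided-bit serial transposition can be modeled functionally in a few lines. The word width and slice size are illustrative, and this models only the data movement, not the cell registers or VLSI timing:

```python
def transpose_bit_serial(matrix, width=16, chunk=4):
    """Transpose an N x N matrix of `width`-bit integers by streaming each
    element as `chunk`-bit slices, one pass per slice, mimicking the narrow
    per-cell datapath that keeps transposition-cell hardware small."""
    n = len(matrix)
    out = [[0] * n for _ in range(n)]
    mask = (1 << chunk) - 1
    for s in range(width // chunk):          # one pass per bit-slice
        shift = s * chunk
        for i in range(n):
            for j in range(n):
                piece = (matrix[i][j] >> shift) & mask
                out[j][i] |= piece << shift  # slice lands at the transposed cell
    return out
```

Narrowing the per-pass datapath from `width` bits to `chunk` bits is the trade the paper makes: each cell's register and the inter-cell wiring shrink, at the cost of multiple serial passes.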
