• Title/Summary/Keyword: Complex Data-Algorithm Data Processing


Study on Data Control System Design Method with Complex Data-Algorithm Data Processing (복합적 자료-알고리즘 자료처리 방식을 적용한 자료처리 시스템 설계 방안 연구)

  • Kim, Min Wook;Park, Yeon Gu;Yi, Jonghyuk;Lee, Jeong-Deok
    • Journal of Satellite, Information and Communications / v.10 no.3 / pp.11-15 / 2015
  • In this study, we present an architecture design for the data control system of a water hazard information platform, based on an analysis of the complexity of its data processing. Data control systems in data collection and analysis platforms are generally based on fixed data-algorithm data processing, meaning that the mapping between data and algorithms is fixed. However, the number of data-processing paths in such systems is rapidly increasing as system complexity grows. To hold down this number, dynamic data-algorithm data processing can be applied to the data control system. After comparing the data-algorithm data processing methods, we suggest a design method for a data control system optimized for the water hazard information platform.
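The fixed versus dynamic data-algorithm binding contrasted in this abstract can be sketched with a runtime registry; the data types and algorithm names below are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of dynamic data-algorithm data processing: algorithms are
# bound to data types through a runtime registry, so adding a data type does
# not require a new fixed pipeline. All names here are illustrative.
ALGORITHMS = {}

def register(data_type):
    """Bind an algorithm to a data type at runtime."""
    def deco(fn):
        ALGORITHMS[data_type] = fn
        return fn
    return deco

@register("rainfall")
def average_mm(values):
    # Hypothetical rainfall algorithm: mean of the observations.
    return sum(values) / len(values)

@register("water_level")
def peak_cm(values):
    # Hypothetical water-level algorithm: peak observation.
    return max(values)

def process(data_type, values):
    # A single dispatch point replaces N fixed data-to-algorithm pipelines.
    return ALGORITHMS[data_type](values)
```

With this structure, the number of explicit data-processing paths stays constant as new data types are registered.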

A Study of Non-Intrusive Appliance Load Identification Algorithm using Complex Sensor Data Processing Algorithm (복합 센서 데이터 처리 알고리즘을 이용한 비접촉 가전 기기 식별 알고리즘 연구)

  • Chae, Sung-Yoon;Park, Jinhee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.199-204 / 2017
  • In this study, we present a home appliance load identification algorithm that utilizes complex sensory data to improve on existing NIALM based only on total power usage information. We define an influence graph between appliance status and the measured sensor data. The device identification result is calculated as the weighted sum of the prediction from the sensor data processing algorithm and the prediction based on total power usage. We evaluate the proposed algorithm by comparing its appliance identification accuracy with that of the existing NIALM algorithm.
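The weighted-sum fusion described in the abstract can be illustrated in a few lines; the weight value and appliance names are hypothetical, not from the paper.

```python
def combined_score(sensor_pred, power_pred, w=0.6):
    """Weighted sum of the sensor-data prediction and the
    total-power-usage prediction (w is an assumed weight)."""
    return w * sensor_pred + (1.0 - w) * power_pred

def identify(candidates, w=0.6):
    """Pick the appliance whose combined score is highest.
    candidates maps appliance -> (sensor_pred, power_pred), both in [0, 1]."""
    return max(candidates, key=lambda a: combined_score(*candidates[a], w))
```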

Strong Uncorrelated Transform Applied to Spatially Distant Channel EEG Data

  • Kim, Youngjoo;Park, Cheolsoo
    • IEIE Transactions on Smart Processing and Computing / v.4 no.2 / pp.97-102 / 2015
  • In this paper, an extension of the standard common spatial pattern (CSP) algorithm using the strong uncorrelated transform (SUT) is used to extract features for accurate classification of left- and right-hand motor imagery tasks. The algorithm is designed to analyze complex-valued data, which preserves the additional information in the relationship between two electroencephalogram (EEG) signals from distant channels. This is based on the fact that distant regions of the brain are spatially distributed and related, as in a network. Real-world left- and right-hand motor imagery EEG data were acquired from the Physionet database, and a support vector machine (SVM) was used as the classifier to test the proposed method. The results showed that extracting features from pair-wise channel data using the strong uncorrelated transform complex common spatial pattern (SUTCCSP) provides a higher classification rate than the standard CSP algorithm.
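The pairing of two distant channels into one complex-valued signal, on which such SUT-based processing rests, can be sketched as follows; the example computes the covariance and pseudo-covariance, the second-order statistics that distinguish complex-domain methods from standard CSP. This is a minimal illustration, not the SUTCCSP algorithm itself.

```python
def augmented_stats(x, y):
    """Pair two distant-channel signals as z = x + jy and compute the
    covariance E[zz*] and pseudo-covariance E[zz]; a nonzero
    pseudo-covariance signals a relationship between the channels."""
    z = [complex(a, b) for a, b in zip(x, y)]
    n = len(z)
    mean = sum(z) / n
    zc = [v - mean for v in z]                     # center the signal
    cov = sum(v * v.conjugate() for v in zc) / n   # real, non-negative
    pcov = sum(v * v for v in zc) / n              # complex in general
    return cov, pcov
```

For perfectly anti-correlated toy channels the pseudo-covariance magnitude equals the covariance, the maximally improper case.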

Review of Data-Driven Multivariate and Multiscale Methods

  • Park, Cheolsoo
    • IEIE Transactions on Smart Processing and Computing / v.4 no.2 / pp.89-96 / 2015
  • In this paper, the time-frequency analysis algorithms empirical mode decomposition and local mean decomposition are reviewed, and their applications to nonlinear and nonstationary real-world data are discussed. In addition, their generic extensions to the complex domain are addressed for the analysis of multichannel data. Simulations of these algorithms on synthetic data illustrate their fundamental structure and how they are designed for the analysis of nonlinear and nonstationary data. Applications of the complex versions of the algorithms to synthetic data also demonstrate their benefit for accurate frequency decomposition of multichannel data.
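A defining property used by empirical mode decomposition is that an intrinsic mode function (IMF) has numbers of zero crossings and local extrema differing by at most one; a minimal check of that property can be sketched as follows (the sampling helper is illustrative, not from the review).

```python
import math

def zero_crossings(x):
    """Count sign changes between consecutive samples."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0)

def local_extrema(x):
    """Count interior local maxima and minima."""
    return sum(1 for a, b, c in zip(x, x[1:], x[2:]) if (b - a) * (c - b) < 0)

def is_imf_like(x):
    # An IMF's zero-crossing and extrema counts differ by at most one.
    return abs(zero_crossings(x) - local_extrema(x)) <= 1

def sample_tone(freqs_amps, n=200):
    """Sum of sinusoids sampled at n points over one unit interval."""
    return [sum(a * math.sin(2 * math.pi * f * i / n) for f, a in freqs_amps)
            for i in range(n)]
```

A pure tone satisfies the property; a slow tone with a small fast ripple riding on it does not, which is exactly the kind of signal sifting is designed to separate.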

Study of Efficient Algorithm for Deduplication of Complex Structure (복잡한 구조의 데이터 중복제거를 위한 효율적인 알고리즘 연구)

  • Lee, Hyeopgeon;Kim, Young-Woon;Kim, Ki-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.1 / pp.29-36 / 2021
  • The amount of data generated has been growing exponentially, and the complexity of data has been increasing owing to the advancement of information technology (IT). Big data analysts and engineers have therefore been actively conducting research to minimize the analysis targets for faster processing and analysis of big data. Hadoop, which is widely used as a big data platform, provides various processing and analysis functions, including minimization of analysis targets through Hive, a subproject of Hadoop. However, Hive uses a vast amount of memory for data deduplication because it is implemented without considering the complexity of the data. Therefore, an efficient algorithm is proposed for deduplication of data with complex structures. The performance evaluation results demonstrate that the proposed algorithm reduces memory usage and data deduplication time by approximately 79% and 0.677%, respectively, compared to Hive. In the future, performance evaluation based on a large number of data nodes is required for realistic verification of the proposed algorithm.
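One common way to deduplicate records with nested structure, in the spirit of the problem this paper addresses, is to hash a canonical serialization of each record instead of comparing records pairwise; this sketch is not the paper's algorithm.

```python
import json

def dedupe(records):
    """Remove duplicate nested records by keying each on a canonical
    JSON serialization (sorted keys), so equal records match even when
    their fields are stored in a different order."""
    seen, out = set(), []
    for rec in records:
        key = json.dumps(rec, sort_keys=True)
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out
```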

Acceleration of FFT on a SIMD Processor (SIMD 구조를 갖는 프로세서에서 FFT 연산 가속화)

  • Lee, Juyeong;Hong, Yong-Guen;Lee, Hyunseok
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.2 / pp.97-105 / 2015
  • This paper discusses the implementation of Bruun's FFT on a SIMD processor. The FFT is a core algorithm in digital signal processing, and its efficient computation is important for signal processing performance. Bruun's FFT is one of the fast Fourier transform algorithms based on recursive factorization. Compared to the popular Cooley-Tukey algorithm, it is computationally advantageous because most of its operations are real multiplications instead of complex ones. However, it shows more complicated data alignment patterns and requires a larger memory for storing coefficient data when implemented on a SIMD processor. According to our experimental results, in processing an FFT with 1024 complex input samples on a SIMD processor, Bruun's algorithm achieves approximately 1.2 times higher throughput but uses approximately 4 times more memory (20 Kbyte) than the Cooley-Tukey algorithm. Therefore, when the constraint on silicon area is loose, Bruun's algorithm is appropriate for FFT processing on a SIMD processor.
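For contrast with Bruun's real-factorization approach, the complex twiddle-factor multiplications of the standard radix-2 Cooley-Tukey algorithm are visible in a minimal recursive sketch:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two).
    The twiddle-factor products below are full complex multiplications,
    which is exactly the cost Bruun's algorithm reduces to real ones."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # complex multiply
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```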

A Low-Complexity 128-Point Mixed-Radix FFT Processor for MB-OFDM UWB Systems

  • Cho, Sang-In;Kang, Kyu-Min
    • ETRI Journal / v.32 no.1 / pp.1-10 / 2010
  • In this paper, we present a fast Fourier transform (FFT) processor with four parallel data paths for multiband orthogonal frequency-division multiplexing ultra-wideband systems. The proposed 128-point FFT processor employs both a modified radix-2^4 algorithm and a radix-2^3 algorithm to significantly reduce the numbers of complex constant multipliers and complex Booth multipliers. It also employs substructure-sharing multiplication units instead of constant multipliers to efficiently conduct multiplication operations with only addition and shift operations. The proposed FFT processor is implemented and tested using 0.18 µm CMOS technology with a supply voltage of 1.8 V. The hardware-efficient 128-point FFT processor with four data streams can support a data processing rate of up to 1 Gsample/s while consuming 112 mW. The implementation results show that the proposed 128-point mixed-radix FFT architecture significantly reduces hardware cost and power consumption in comparison to existing 128-point FFT architectures.
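The shift-and-add idea behind substructure-sharing multiplication units can be sketched in software: a constant multiplication decomposes into shifts and additions over the set bits of the constant. This is a plain binary decomposition for illustration; the paper's hardware units additionally share common subexpressions across coefficients.

```python
def shift_add_mul(x, c):
    """Multiply integer x by a non-negative constant c using only
    left shifts and additions, one term per set bit of c."""
    result, bit = 0, 0
    while c:
        if c & 1:
            result += x << bit  # add x shifted by this bit position
        c >>= 1
        bit += 1
    return result
```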

Study on the 3D Modeling Data Conversion Algorithm from 2D Images (2D 이미지에서 3D 모델링 데이터 변환 알고리즘에 관한 연구)

  • Choi, Tea Jun;Lee, Hee Man;Kim, Eung Soo
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.479-486 / 2016
  • In this paper, an algorithm that converts a 2D image into a 3D model is discussed. The 2D picture drawn by a user is scanned for image processing. The Canny algorithm is employed to find the contours. The waterfront algorithm is proposed to find the foreground image area. The foreground area is segmented to decompose complex shapes into simple shapes. Each simple segmented foreground region is then converted into a 3D model, and the parts are combined into a complex 3D model. The 3D conversion formula used in this paper is also discussed. The generated 3D model data will be useful for 3D animation and other 3D content creation.
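The simplest form of a 2D-to-3D conversion step, lifting a segmented contour into a prism, can be sketched as follows; this is an illustrative stand-in, not the paper's conversion formula.

```python
def extrude(contour, height):
    """Lift a 2D contour (list of (x, y) vertices) into a 3D prism:
    one vertex ring at z = 0 and a copy at z = height."""
    bottom = [(x, y, 0.0) for x, y in contour]
    top = [(x, y, float(height)) for x, y in contour]
    return bottom + top
```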

A Density Peak Clustering Algorithm Based on Information Bottleneck

  • Yongli Liu;Congcong Zhao;Hao Chao
    • Journal of Information Processing Systems / v.19 no.6 / pp.778-790 / 2023
  • Although density peak clustering can often easily yield excellent results, there is still room for improvement when dealing with complex, high-dimensional datasets. One of the main limitations of this algorithm is its reliance on geometric distance as the sole similarity measure. To address this limitation, we draw inspiration from information bottleneck theory and propose a novel density peak clustering algorithm that incorporates this theory as a similarity measure. Specifically, our algorithm utilizes the joint probability distribution between data objects and feature information, and employs the loss of mutual information as the measurement standard. This approach not only eliminates the potential for subjective error in selecting a similarity method, but also enhances performance on datasets with multiple centers and high dimensionality. To evaluate the effectiveness of our algorithm, we conducted experiments on ten carefully selected datasets and compared the results with three other algorithms. The experimental results demonstrate that our information bottleneck-based density peak clustering (IBDPC) algorithm consistently achieves high accuracy, highlighting its potential as a valuable tool for data clustering tasks.
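The two quantities at the heart of density peak clustering, the local density rho and the distance delta to the nearest higher-density point, can be sketched with the plain geometric-distance similarity that the paper replaces; cluster centers are points where both values are large.

```python
def density_peaks(points, dc):
    """Baseline density peak quantities for a list of coordinate tuples.
    rho[i]  = number of other points within cutoff distance dc;
    delta[i] = distance to the nearest point of strictly higher density
               (global maximum distance for the densest points)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    n = len(points)
    rho = [sum(1 for j in range(n)
               if j != i and dist(points[i], points[j]) < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j])
                  for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher
                     else max(dist(points[i], p) for p in points))
    return rho, delta
```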

Inspection of guided missiles applied with parallel processing algorithm (병렬처리 알고리즘 적용 유도탄 점검)

  • Jung, Eui-Jae;Koh, Sang-Hoon;Lee, You-Sang;Kim, Young-Sung
    • Journal of Advanced Navigation Technology / v.25 no.4 / pp.293-298 / 2021
  • In general, the seeker and guidance control unit of a guided weapon process target search, recognition, and capture information to indicate the state of the guided missile, and control its operation. The signals required for guided weapons are the line-of-sight change rate, the visual signal, and the terminal-stage fuselage orientation signal. To process the complex, hard-to-process signals of recent missiles in real time, the data processing speed of missile inspection must be increased. This study measured processing speed after applying the stop-and-go and inverse enumeration algorithms, among the parallel algorithm methods of PINQ, and compared the real-time processing speed of the signal data required for the guided missile using the guided missile inspection program. Based on the derived results, we propose an effective method for processing missile data with a parallel processing algorithm by comparing the processing speed and CPU core utilization of the multi-core and single-core processing methods.
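The serial versus parallel comparison the study performs can be sketched in Python with a thread pool over independent channel workloads; the moving-average filter here is a hypothetical stand-in for the missile signal processing, and the real work used PINQ-style parallel queries rather than this library.

```python
from concurrent.futures import ThreadPoolExecutor

def filter_signal(samples, window=3):
    """Stand-in per-channel workload: a simple moving average."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def process_serial(channels):
    """Single-core path: process each signal channel in turn."""
    return [filter_signal(ch) for ch in channels]

def process_parallel(channels, workers=4):
    """Multi-core path: fan the independent channels across a pool,
    analogous to the parallel processing compared in the paper."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(filter_signal, channels))
```

Both paths must produce identical results; only the throughput differs, which is what the study's speed comparison measures.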