• Title/Abstract/Keyword: data complexity

Search Results: 2,414, Processing Time: 0.028 seconds

Design of Non-coherent Demodulator for LR-WPAN Systems (LR-WPAN 시스템을 위한 비동기 복조 알고리즘 및 하드웨어 구조설계)

  • Lee, Dong-Chan;Jang, Soo-Hyun;Jung, Yun-Ho
    • Journal of Advanced Navigation Technology
    • /
    • v.17 no.6
    • /
    • pp.705-711
    • /
    • 2013
  • In this paper, we present a low-complexity non-coherent demodulation algorithm and hardware architecture for LR-WPAN systems that support variable data rates for various applications. The need for LR-WPAN systems supporting variable data rates is growing with the emergence of diverse sensor applications. Since the existing symbol-based double correlation (SBDC) algorithm requires increased complexity to support variable data rates, we propose the sample-based double correlation (SPDC) algorithm, which can be implemented without any increase in complexity. The proposed non-coherent demodulator was designed in Verilog HDL and implemented on an FPGA prototype board.
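
The abstract does not give the SBDC/SPDC equations, so the sketch below is only a generic, hypothetical illustration of non-coherent detection by correlation magnitude, not the paper's algorithm; the candidate spreading sequences, the 4x8 table size, and all signal parameters are invented.

```python
# Hypothetical sketch of generic non-coherent symbol detection by
# correlation magnitude; this illustrates the general idea only and
# is NOT the SBDC/SPDC algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy "PN sequence" table: 4 candidate symbols, 8 chips each (assumed values).
PN = np.sign(rng.standard_normal((4, 8))) + 0j

def detect_symbol(rx_chips: np.ndarray) -> int:
    """Pick the symbol whose PN sequence gives the largest correlation
    magnitude; taking |.| discards the unknown carrier phase, which is
    what makes the detector non-coherent."""
    corr = PN.conj() @ rx_chips          # correlate against every candidate
    return int(np.argmax(np.abs(corr)))  # magnitude only, no phase tracking

# Transmit symbol 2 with an arbitrary phase rotation and some noise.
phase = np.exp(1j * 1.3)
rx = PN[2] * phase + 0.1 * rng.standard_normal(8)
print(detect_symbol(rx))  # expected: 2
```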

Study of Efficient Algorithm for Deduplication of Complex Structure (복잡한 구조의 데이터 중복제거를 위한 효율적인 알고리즘 연구)

  • Lee, Hyeopgeon;Kim, Young-Woon;Kim, Ki-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.1
    • /
    • pp.29-36
    • /
    • 2021
  • The amount of data generated has been growing exponentially, and the complexity of data has been increasing owing to the advancement of information technology (IT). Big data analysts and engineers have therefore been actively conducting research to minimize the analysis targets for faster processing and analysis of big data. Hadoop, which is widely used as a big data platform, provides various processing and analysis functions, including minimization of analysis targets through Hive, a subproject of Hadoop. However, Hive uses a vast amount of memory for data deduplication because it is implemented without considering the complexity of the data. Therefore, an efficient algorithm is proposed for deduplicating data with complex structures. The performance evaluation results demonstrate that, compared to Hive, the proposed algorithm reduces memory usage by approximately 79% and data deduplication time by approximately 0.677%. In the future, performance evaluation with a large number of data nodes will be required for realistic verification of the proposed algorithm.
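
The paper's algorithm is not spelled out in the abstract. As a hedged illustration of one common approach to the same problem (not the paper's method), the sketch below deduplicates records with nested structure by hashing a canonical serialization and keeping only the digests, so memory grows with the number of unique records rather than with record size; all record fields are invented for the example.

```python
# Hypothetical sketch: deduplicate nested (complex) records by
# canonicalizing each one to a stable byte string and storing a set of
# digests instead of whole records. Illustrative only, not the paper's
# algorithm.
import hashlib
import json

def canonical_digest(record: dict) -> bytes:
    # sort_keys makes logically equal nested dicts serialize identically
    blob = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode("utf-8")).digest()

def deduplicate(records):
    seen = set()
    for rec in records:
        d = canonical_digest(rec)
        if d not in seen:       # first occurrence wins
            seen.add(d)
            yield rec

data = [
    {"id": 1, "tags": {"a": 1, "b": 2}},
    {"tags": {"b": 2, "a": 1}, "id": 1},   # same content, different key order
    {"id": 2, "tags": {"a": 1}},
]
print(list(deduplicate(data)))  # two unique records survive
```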

Ensuring Data Confidentiality and Privacy in the Cloud using Non-Deterministic Cryptographic Scheme

  • John Kwao Dawson;Frimpong Twum;James Benjamin Hayfron Acquah;Yaw Missah
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.7
    • /
    • pp.49-60
    • /
    • 2023
  • The amount of data generated by electronic systems through e-commerce, social networks, and data computation has risen. However, the security of data has always been a challenge. The problem is not the quantity of data but how to secure it by ensuring its confidentiality and privacy. Though there is considerable research on cloud data security, this study proposes a security scheme with the lowest execution time. The approach employs a non-linear time complexity to achieve data confidentiality and privacy. A symmetric algorithm dubbed the Non-Deterministic Cryptographic Scheme (NCS) is proposed to address the increased execution time of existing cryptographic schemes. NCS has linear time complexity with a low and unpredictable trend of execution times. It achieves confidentiality and privacy of data on the cloud by converting the plaintext into ciphertext with a small number of iterations, thereby decreasing the execution time while maintaining high security. The algorithm is based on good prime numbers, a linear congruential generator (LCG), a sliding window algorithm (SWA), and the XOR gate. For the implementation in C, thirty execution times were measured and their average was taken. A comparative analysis of the NCS was performed against the AES, DES, and RSA algorithms with key sizes of 128kb, 256kb, and 512kb using a dataset from Kaggle. The results showed that the proposed NCS execution times were lower than those of AES, which had better execution times than DES, with RSA having the longest. Contrary to the existing knowledge that execution time is relative to data size, the results obtained from the experiment indicated otherwise for the proposed NCS algorithm: with data sizes of 128kb, 256kb, and 512kb, the execution times in milliseconds were 38, 711, and 378, respectively. This validates the NCS as a non-deterministic cryptographic algorithm, and the study findings hence support the argument that data size does not determine execution time.
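
The abstract names the building blocks (good primes, LCG, sliding window, XOR) but not how they are combined. As a rough, deliberately insecure toy that is not the NCS itself (the prime selection and sliding-window steps are omitted, and the LCG constants are arbitrary), the following shows only the LCG-keystream-plus-XOR idea:

```python
# Hypothetical toy combining two of the named building blocks, a linear
# congruential generator (LCG) and XOR, into a keystream cipher. NOT the
# paper's NCS and NOT secure; it only illustrates the LCG + XOR idea.
def lcg(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    x = seed
    while True:
        x = (a * x + c) % m     # classic LCG recurrence x' = (a*x + c) mod m
        yield x & 0xFF          # emit one keystream byte

def xor_cipher(data: bytes, seed: int) -> bytes:
    ks = lcg(seed)
    return bytes(b ^ next(ks) for b in data)

msg = b"cloud data"
ct = xor_cipher(msg, seed=42)
assert xor_cipher(ct, seed=42) == msg   # XOR is its own inverse
print(ct.hex())
```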

Metadata-Based Data Structure Analysis to Optimize Search Speed and Memory Efficiency (검색 속도와 메모리 효율 최적화를 위한 메타데이터 기반 데이터 구조 분석)

  • Kim Se Yeon;Lim Young Hoon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.7
    • /
    • pp.311-318
    • /
    • 2024
  • As the amount of data increases with the development of artificial intelligence and the Internet, data management is becoming increasingly important, and the efficient utilization of data retrieval and memory space is crucial. In this study, we investigate how to optimize search speed and memory efficiency by analyzing data structures based on metadata. As a research method, we compared and analyzed the performance of the array, association list, dictionary, binary tree, and graph data structures using metadata of photographic images, focusing on time and space complexity. Through experimentation, it was confirmed that the dictionary data structure performs best in collection speed and the graph data structure performs best in search speed when dealing with large-scale image data. We expect the results of this paper to provide practical guidelines for selecting data structures that optimize search speed and memory efficiency for image data.
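
As a minimal sketch in the spirit of the paper's comparison (the actual metadata fields and structures tested are not reproduced here, so the `filename` key and the record layout are assumptions), the following times a key lookup in a list against a dictionary:

```python
# Hypothetical micro-benchmark: time key lookups in a list versus a
# dictionary for image-style metadata records. Illustrative values only.
import time

N = 100_000
records = [{"filename": f"img_{i}.jpg", "size": i} for i in range(N)]
as_list = records                                # O(n) linear search
as_dict = {r["filename"]: r for r in records}    # O(1) average lookup

target = "img_99999.jpg"

t0 = time.perf_counter()
_ = next(r for r in as_list if r["filename"] == target)
t_list = time.perf_counter() - t0

t0 = time.perf_counter()
_ = as_dict[target]
t_dict = time.perf_counter() - t0

print(f"list search: {t_list:.6f}s, dict search: {t_dict:.6f}s")
```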

Comparative Analysis of Centralized Vs. Distributed Locality-based Repository over IoT-Enabled Big Data in Smart Grid Environment

  • Siddiqui, Isma Farah;Abbas, Asad;Lee, Scott Uk-Jin
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2017.01a
    • /
    • pp.75-78
    • /
    • 2017
  • This paper compares the operational and network characteristics of centralized and distributed repositories for big data solutions in the IoT-enabled Smart Grid environment. The comparative analysis clearly shows that the centralized repository consumes less memory, while the distributed locality-based repository reduces network complexity issues relative to the centralized repository in state-of-the-art big data solutions.


A Fast Error Concealment Using a Data Hiding Technique and a Robust Error Resilience for Video (데이터 숨김과 오류 내성 기법을 이용한 빠른 비디오 오류 은닉)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.143-150
    • /
    • 2003
  • Error concealment plays an important role in combating transmission errors. Error concealment methods that produce better quality are generally of higher complexity, making some of the more sophisticated algorithms unsuitable for real-time applications. In this paper, we develop a temporally and spatially error-resilient video encoding and data hiding approach to facilitate error concealment at the decoder. A block interleaving scheme is introduced to isolate erroneous blocks caused by packet losses, providing spatial error resilience. For temporal error resilience, data hiding is applied to the transmission of parity bits that protect the motion vectors. To perform error concealment quickly, a set of edge features extracted from each block is embedded imperceptibly into the host media using data hiding and transmitted to the decoder. If part of the media data is damaged during transmission, the embedded features are used to conceal the lost data at the decoder. This method decreases the complexity of error concealment by reducing the process of estimating lost data from neighboring blocks. The proposed data hiding of parity bits and block features adds little complexity to a standard encoder. Experimental results show that the proposed method properly and effectively conceals burst errors occurring on transmission channels such as the Internet.
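
The abstract does not specify the interleaving pattern, so the sketch below is a hedged illustration of block interleaving in general: blocks are written row-wise and transmitted column-wise, so one burst of consecutive packet losses maps to isolated, non-adjacent blocks in the frame. The 3x4 frame size and the 3-block burst are invented.

```python
# Hypothetical block interleaving sketch; parameters are illustrative,
# not the paper's exact scheme.
def interleave(blocks, rows, cols):
    assert len(blocks) == rows * cols
    # read out column by column instead of row by row
    return [blocks[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(blocks, rows, cols):
    return interleave(blocks, cols, rows)  # inverse: swap the dimensions

frame = list(range(12))                  # 12 blocks, laid out 3x4
tx = interleave(frame, 3, 4)
tx[0:3] = [None, None, None]             # a burst loss of 3 consecutive blocks
rx = deinterleave(tx, 3, 4)
print(rx)  # the None gaps land far apart, easing per-block concealment
```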

The Effect of Project Complexity, Team Members' Structure, and Process Index on Efficiency of System Integration Projects

  • Hong, Han-Kuk;Park, Chul-Jae;Leem, Byung-Hak
    • Journal of information and communication convergence engineering
    • /
    • v.6 no.3
    • /
    • pp.323-326
    • /
    • 2008
  • Data Envelopment Analysis (DEA) is a theoretically sound framework for performance analysis that offers many advantages over traditional methods such as performance ratios and regression analysis. Largely the result of multidisciplinary research in economics, engineering, and management during the last three decades, DEA is best described as an effective new way of visualizing and analyzing performance data. Meanwhile, overseas information technology companies have aggressively tried to enter the domestic market. In this age of globalization and intense competition, it is imperative that system integration (SI) companies introduce performance evaluation models for SI projects, including the Capability Maturity Model and Software Process Improvement and Capability Determination, to gain a competitive advantage. This makes our research on the evaluation of SI projects very timely. The purpose of the study is not only to evaluate the efficiency of each project by DEA but also to gain insight into factors such as project complexity, team members' man-month structure, and process index (project management index) that are linked to project performance.
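
As a hedged worked example of the kind of efficiency score DEA produces (the paper's actual data, inputs, and outputs are not reproduced; the four projects below are made-up decision-making units with one assumed input and one assumed output), the standard input-oriented CCR model can be solved as a small linear program:

```python
# Hypothetical input-oriented CCR DEA example solved with SciPy's linprog.
# DMU data is invented: one input (man-months), one output (function points).
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 30.0, 40.0, 25.0]])     # inputs,  shape (m, n_dmus)
Y = np.array([[100.0, 120.0, 200.0, 90.0]])  # outputs, shape (s, n_dmus)
n = X.shape[1]

def ccr_efficiency(o: int) -> float:
    """min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o]."""
    c = np.r_[1.0, np.zeros(n)]                         # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                   # X lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")  # 1.0 = efficient
```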

A genetic-algorithm-based high-level synthesis for partitioned bus architecture (유전자 알고리즘을 이용한 분할 버스 아키텍처의 상위 수준 합성)

  • 김용주;최기영
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.3
    • /
    • pp.1-10
    • /
    • 1997
  • We present an approach to high-level synthesis for a specific target architecture: the partitioned bus architecture. In this approach, we have the specific goals of minimizing data transfer length and the number of buses, in addition to common synthesis goals such as minimizing the number of control steps and satisfying a given resource constraint. Minimizing data transfer length and the number of buses can be very important design goals in the era of deep submicron technology, in which interconnection delay and area dominate the total delay and area of the chip to be designed. In the partitioned bus architecture, to obtain an optimal solution satisfying all the goals, partitioning of operation nodes among segments and ordering of segments, as well as scheduling and allocation/binding, must be considered concurrently. These additional goals may impose much more complexity on the existing high-level synthesis problem. To cope with this increased complexity and obtain reasonable results, we employ two ideas in our synthesis approach: extension of the target architecture to alleviate the bus requirement for data transfer, and adoption of a genetic algorithm as the principal methodology for design space exploration. Experimental results show that our approach is a promising high-level synthesis methodology for the partitioned bus architecture.
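
As a hedged, minimal sketch of the kind of genetic algorithm the paper adopts for design-space exploration (the chromosome encoding, dependence list, and cost model below are all invented; the paper's real encoding handles scheduling, binding, and segment ordering as well), each chromosome here assigns 8 operation nodes to one of 2 bus segments and the toy cost counts cross-segment data transfers:

```python
# Hypothetical minimal GA: truncation selection, one-point crossover,
# bit-flip mutation. Toy problem only, not the paper's full formulation.
import random

random.seed(1)
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (0, 4)]

def cost(chrom):                        # cross-segment transfers to minimize
    return sum(chrom[a] != chrom[b] for a, b in EDGES)

def evolve(pop_size=30, genes=8, gens=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```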


High efficient 3D vision system using simplification of stereo image rectification structure (스테레오 영상 교정 구조의 간략화를 이용한 고효율 3D 비젼시스템)

  • Kim, Sang Hyun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.6
    • /
    • pp.605-611
    • /
    • 2019
  • 3D vision systems have recently found many applications, but their popularization faces several problems that must be overcome. A volumetric display must process a large amount of visual data, so a highly efficient vision system must be designed for the display. In a stereo system for a volumetric display, disparity vectors from the stereoscopic sequences and residual images relative to the reference images are transmitted, and the reconstructed stereoscopic sequences are displayed at the receiver. The central issue in designing an efficient volumetric vision system therefore lies in selecting an appropriate stereo matching method and a robust vision system. In this paper, we propose a highly efficient vision system that reduces rectification error and performs 3D data extraction efficiently with low computational complexity. Experimental results show that the proposed method performs 3D data extraction efficiently while reducing rectification error and keeping computational complexity low.
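
As background for why rectification matters here: once the two image planes are row-aligned, depth follows from horizontal disparity alone via Z = f * B / d. The small hedged illustration below uses invented focal length and baseline values, not parameters from the paper:

```python
# Hypothetical numeric illustration of depth from disparity after
# rectification: Z = f * B / d. All values are assumed for the example.
focal_px = 800.0       # focal length in pixels (assumed)
baseline_m = 0.12      # camera baseline in meters (assumed)

for disparity_px in (4.0, 16.0, 64.0):
    depth_m = focal_px * baseline_m / disparity_px
    print(f"disparity {disparity_px:5.1f} px -> depth {depth_m:6.2f} m")
```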

A Study on the Use Intention of Xiaomi in Korean Market

  • Jin, Peng-Ru;Lee, Jong-Ho
    • The Journal of Industrial Distribution & Business
    • /
    • v.9 no.11
    • /
    • pp.17-24
    • /
    • 2018
  • Purpose - The portability, functionality, and convenience of smartphones are constantly being updated. With the rapid growth in the number of mobile terminal users, Xiaomi has also developed rapidly, and in February 2015 its user base exceeded 100 million people. As a transnational company, Xiaomi has grown quickly not only in China but also in Korea. However, the literature review found no prior study on Xiaomi mobile phones in the Korean market, so such a study is needed. Research design, data, and methodology - IBM SPSS Statistics 23.0 and IBM SPSS AMOS 23.0 were used to analyze all the data collected. Results - First, the innovation diffusion characteristic of compatibility has positive impacts on achievement expectations and effort expectations. Second, the innovation diffusion characteristic of complexity has negative impacts on achievement expectations and effort expectations. Third, the innovation diffusion characteristic of relative superiority has positive impacts on achievement expectations and effort expectations. Conclusions - Based on the analysis of prior studies, the innovation acceptance characteristics consist of compatibility, complexity, relative superiority, observability, and trialability, while the technology acceptance characteristics consist of achievement expectations, effort expectations, social influence, and facilitating conditions. The study examines continued use intention and tests the hypotheses of the research model.