• Title/Summary/Keyword: initialization process


A Massively Parallel Algorithm for Fuzzy Vector Quantization (퍼지 벡터 양자화를 위한 대규모 병렬 알고리즘)

  • Huynh, Luong Van;Kim, Cheol-Hong;Kim, Jong-Myon
    • The KIPS Transactions:PartA
    • /
    • v.16A no.6
    • /
    • pp.411-418
    • /
    • 2009
  • Vector quantization based on fuzzy clustering has been widely used in data compression, since applying fuzzy clustering analysis in the early stages of a vector quantization process makes the process less sensitive to its initialization. However, fuzzy clustering is computationally very intensive because of its complex framework for the quantitative formulation of the uncertainty in the training vector space. To overcome this computational burden, this paper introduces an array architecture for the implementation of fuzzy vector quantization (FVQ). The array architecture, which consists of 4,096 processing elements (PEs), provides a computationally efficient solution by employing an effective vector assignment strategy during the clustering process. Experimental results indicate that the proposed parallel implementation provides significantly greater performance and efficiency than appropriately scaled alternative array systems. In addition, it provides 1000x greater performance and 100x higher energy efficiency than implementations using today's ARM and TI DSP processors in the same 130nm technology. These results demonstrate the potential of the proposed parallel implementation for improved performance and energy efficiency.
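The fuzzy-clustering stage the abstract refers to centers on a per-vector membership update, and the per-vector independence of that update is what lets the work spread across a PE array. Below is a minimal sketch of the standard fuzzy c-means membership computation, with the common fuzzifier m = 2; the data and parameters are illustrative, not the paper's:

```python
import math

def fcm_memberships(vectors, centroids, m=2.0):
    """Fuzzy c-means membership update: each training vector receives a
    graded degree of membership in every centroid, which is what makes
    fuzzy vector quantization less sensitive to initialization."""
    memberships = []
    for x in vectors:
        dists = [math.dist(x, c) for c in centroids]
        row = []
        for d_i in dists:
            if d_i == 0.0:  # vector coincides with a centroid
                row = [1.0 if d == 0.0 else 0.0 for d in dists]
                break
            # u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
            row.append(1.0 / sum((d_i / d_j) ** (2.0 / (m - 1.0))
                                 for d_j in dists))
        memberships.append(row)
    return memberships
```

Each row depends only on its own training vector, so with 4,096 PEs one row of work can be assigned per element, which is the kind of vector assignment strategy an array architecture can exploit.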

Enabling Environment for Participation in Information Storage Media Export and Digital Evidence Search Process using IPA (정보저장매체 반출 및 디지털 증거탐색 과정에서의 참여권 보장 환경에 대한 중요도-이행도 분석)

  • Yang, Sang Hee;Lee, Choong C.;Yun, Haejung
    • The Journal of Society for e-Business Studies
    • /
    • v.23 no.3
    • /
    • pp.129-143
    • /
    • 2018
  • Recently, the use of digital media such as computers and smart devices has been increasing rapidly. The vast and diverse information seized under a warrant of an investigating agency also includes information irrelevant to the crime. Therefore, when such information is confiscated, the infringement of the basic rights, defense rights, and privacy of the person subject to seizure has been a center of criticism. Although investigation agencies guarantee the right to participate, there are no specific guidelines, so practices vary with context and environment. In this process, abuse of the participation right harms the speed and integrity of the investigation, with the side effect that digital evidence might be destroyed by remote initialization. In this study, we surveyed digital evidence analysts across the country based on four domains and thirty measurement items concerning an enabling environment for participation in the information storage media export and digital evidence search process. The difference between importance and performance was analyzed with an IPA matrix over the process, location, people, and technology dimensions. Seven items belong to the "concentrate here" area: one process-related, three location-related, and three people-related items. This study is meaningful as a basis for establishing proper policies and strategies for ensuring the right to participate, as well as for minimizing the side effects.

Robust 2D Feature Tracking in Long Video Sequences (긴 비디오 프레임들에서의 강건한 2차원 특징점 추적)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • The KIPS Transactions:PartB
    • /
    • v.14B no.7
    • /
    • pp.473-480
    • /
    • 2007
  • Feature tracking in video frame sequences has suffered from instability and frequent failures of feature matching between two successive frames. In this paper, we propose a robust 2D feature tracking method that remains stable over long video sequences. To improve the stability of feature tracking, we predict the spatial movement in the current image frame using state variables. The predicted current movement is used to initialize the search window. By computing feature similarities within the search window, we refine the current feature positions and then update the current feature states. This tracking process is repeated for each input frame. To reduce false matches, an outlier rejection stage is also introduced. Experimental results on real video sequences show that the proposed method performs stable feature tracking over long frame sequences.
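The predict, search, refine, update loop described in the abstract can be sketched as follows. The constant-velocity state model, the SSD similarity measure, and the window radius are illustrative assumptions, not necessarily the paper's exact choices:

```python
def predict_position(state):
    """Constant-velocity prediction from state = (x, y, vx, vy)."""
    x, y, vx, vy = state
    return (x + vx, y + vy)

def track_feature(frame, template, state, radius=3):
    """Refine the predicted position by scanning a small search window
    for the best SSD match, then update the velocity from the observed
    motion.  frame is a 2D grid indexed as frame[y][x]."""
    px, py = predict_position(state)
    px, py = int(round(px)), int(round(py))      # window initialization
    best, best_pos = float("inf"), (px, py)
    h, w = len(template), len(template[0])
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cx, cy = px + dx, py + dy
            # Sum of squared differences between template and patch.
            ssd = sum((frame[cy + i][cx + j] - template[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if ssd < best:
                best, best_pos = ssd, (cx, cy)
    nx, ny = best_pos
    return (nx, ny, nx - state[0], ny - state[1])  # updated state
```

A real tracker would follow this with the outlier rejection stage the abstract mentions, discarding features whose best SSD exceeds a threshold.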

An Efficient VLSI Architecture of Deblocking Filter in H.264 Advanced Video Coding (H.264/AVC를 위한 디블록킹 필터의 효율적인 VLSI 구조)

  • Lee, Sung-Man;Park, Tae-Geun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.7
    • /
    • pp.52-60
    • /
    • 2008
  • The deblocking filter in the H.264/AVC video coding standard helps to reduce the blocking artifacts produced in the decoding process, but it accounts for one third of the computational complexity of an H.264/AVC decoder, which motivates an efficient hardware accelerator for filtering. This paper proposes a deblocking filter architecture that uses two filters and a set of registers for data reuse. The architecture improves throughput and minimizes the number of external memory accesses by increasing data reuse. After initialization, the two filters can perform filtering operations simultaneously, so it takes only 96 clocks to complete filtering for one macroblock. We designed and synthesized the architecture using the Dongbuanam 0.18μm standard cell library; the maximum clock frequency is 200MHz.

Efficient Power Allocation Algorithm for Wireless Networks (무선망의 효율적 전력 할당 알고리즘)

  • Ahn, Hong-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.103-108
    • /
    • 2016
  • In communication systems, the problem of maximizing the mutual information between the input and output of a channel composed of several subchannels under a total power constraint has a waterfilling solution. OFDM and MIMO channels can be decomposed into parallel subchannels given CSI, and waterfilling optimally allocates power to these subchannels to approach the channel capacity under the total power constraint: more power is allotted to good channels (high SNR) and less or no power to bad channels. Waterfilling finds the exact water level satisfying the power constraint with an iterative algorithm that estimates and updates the level, and this process repeatedly computes partial sums of the inverses of the squared subchannel gains. In this paper we reduce the computation time of the waterfilling algorithm by replacing those partial-sum computations with lookups into an array of partial sums precomputed in an initialization phase.
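The speed-up described, replacing repeated partial sums with a precomputed array, can be sketched as follows. This sketch accumulates inverse channel gains for the water level (the paper's formulation uses inverses of squared subchannel gains, but the prefix-sum idea is the same):

```python
def waterfilling(gains, total_power):
    """Waterfilling power allocation with precomputed prefix sums of the
    inverse channel gains, so the water level for any candidate number of
    active subchannels is a table lookup, not a fresh partial sum."""
    inv = sorted(1.0 / g for g in gains)        # worst channels last
    n = len(inv)
    prefix = [0.0] * (n + 1)                    # prefix[k] = sum of first k
    for k in range(n):
        prefix[k + 1] = prefix[k] + inv[k]
    # Find the largest k whose water level clears the k-th worst channel.
    for k in range(n, 0, -1):
        level = (total_power + prefix[k]) / k   # lookup instead of re-summing
        if level > inv[k - 1]:
            break
    # Power for subchannel i is max(0, level - 1/g_i).
    return level, [max(0.0, level - 1.0 / g) for g in gains]
```

For two subchannels with gains 1 and 4 and unit total power, the stronger channel receives most of the power and the allocations sum exactly to the power budget.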

A Case Study of Collaboration in Cloud Service Ecosystem: Focus on Cloud Service Brokerage (클라우드 서비스 생태계 내의 협업 사례 연구: 클라우드 서비스 중개업을 중심으로)

  • Kim, Kitae;Kim, Jong Woo
    • Information Systems Review
    • /
    • v.17 no.1
    • /
    • pp.1-18
    • /
    • 2015
  • Recently, the number of available cloud services has been increasing dramatically as many IT companies enter the cloud service market. As a result, cloud service brokers are emerging as agents that solve cloud service selection problems and support cloud service initialization and maintenance for unskilled cloud service users. In this study, the NCloud24 case in South Korea and the RightScale case in the USA are analyzed as representative examples of collaboration between original cloud service providers and cloud service brokers. The business models of the two companies are analyzed using the Business Model Canvas. The emergence of cloud service brokers is interpreted as an unbundling process of IaaS (Infrastructure-as-a-Service) cloud service companies. Based on the comparison of the two companies, we discuss future directions of cloud service brokerage.

Hybrid Simulated Annealing for Data Clustering (데이터 클러스터링을 위한 혼합 시뮬레이티드 어닐링)

  • Kim, Sung-Soo;Baek, Jun-Young;Kang, Beom-Soo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.40 no.2
    • /
    • pp.92-98
    • /
    • 2017
  • Data clustering determines groups of patterns in a dataset using a similarity measure and is one of the most important and difficult techniques in data mining. Clustering can formally be considered a particular kind of NP-hard grouping problem. The K-means algorithm, although popular and efficient, is sensitive to initialization and can get stuck in local optima because of its hill-climbing clustering method; it is also not computationally feasible in practice for large datasets and large numbers of clusters. A robust and efficient clustering algorithm is therefore needed to find the global optimum (not a local optimum), especially now that large amounts of data are collected from many IoT (Internet of Things) devices. The objective of this paper is to propose a new Hybrid Simulated Annealing (HSA), which combines simulated annealing with K-means for non-hierarchical clustering of big data. Simulated annealing (SA) is useful for diversified search in a large search space, and K-means is useful for converged search in a predetermined search space; the proposed method balances intensification and diversification to find the globally optimal solution in big data clustering. The performance of HSA is validated on the Iris, Wine, Glass, and Vowel UCI machine learning repository datasets, comparing with previous studies through experiment and analysis. The proposed KSAK (K-means+SA+K-means) and SAK (SA+K-means) outperform KSA (K-means+SA), SA, and plain K-means in our simulations. Our method significantly improves the accuracy and efficiency of finding globally optimal clustering solutions for complex, real-time, and costly data mining processes.
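A minimal sketch of the SAK (SA+K-means) variant: simulated annealing perturbs centroids for diversified search, then K-means converges from the best solution found. The iteration counts, cooling rate, and perturbation scale are illustrative assumptions:

```python
import math, random

def sse(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min(math.dist(p, c) ** 2 for c in centers) for p in points)

def kmeans_step(points, centers):
    """One Lloyd iteration: assign points, then recompute each centroid."""
    k = len(centers)
    groups = [[] for _ in range(k)]
    for p in points:
        groups[min(range(k), key=lambda j: math.dist(p, centers[j]))].append(p)
    return [tuple(sum(col) / len(g) for col in zip(*g)) if g else c
            for g, c in zip(groups, centers)]

def sa_kmeans(points, k, iters=200, temp=1.0, cooling=0.98, seed=0):
    """SAK-style hybrid: SA diversifies the centroid search, then K-means
    performs converged search from the best solution found."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    cost = sse(points, centers)
    best, best_cost = centers, cost
    for _ in range(iters):
        cand = list(centers)
        i = rng.randrange(k)                       # perturb one centroid
        cand[i] = tuple(x + rng.gauss(0, temp) for x in cand[i])
        cand_cost = sse(points, cand)
        # Metropolis acceptance: always take improvements, sometimes worse.
        if cand_cost < cost or rng.random() < math.exp((cost - cand_cost) / temp):
            centers, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = centers, cost
        temp *= cooling
    for _ in range(50):                            # K-means refinement
        best = kmeans_step(points, best)
    return best, sse(points, best)
```

The KSA and KSAK variants compared in the paper reorder the same two building blocks, e.g. KSAK runs a K-means pass before and after the annealing stage.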

Fast K-means and Fuzzy-c-means Algorithms using Adaptive Initialization (적응적인 초기치 설정을 이용한 Fast K-means 및 Fuzzy-c-means 알고리즘)

  • 강지혜;김성수
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.516-524
    • /
    • 2004
  • In this paper, the initial value problem in clustering with K-means or Fuzzy-c-means is considered in order to reduce the number of iterations. Conventionally, the initial values in K-means or Fuzzy-c-means clustering are chosen randomly, which sometimes causes the clustering process to converge to undesired center points; both algorithms are sensitive to the choice of initial values, and that choice has long been a recognized problem. As an approach to the problem, a uniform partitioning method is employed to extract optimal initial points for clustering the data. Experimental results demonstrate the superiority of the proposed method, which reduces the number of iterations needed to find the central points of the clustering groups.
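The uniform partitioning idea, seeding clusters from equal partitions of the data rather than from random points, might be sketched like this; ordering the data by a simple coordinate-sum projection is an assumption made for illustration, not the paper's exact scheme:

```python
def uniform_partition_init(points, k):
    """Deterministic initial centroids: order the data along a simple
    projection, split it into k equal-size partitions, and seed each
    cluster with its partition mean instead of a random point."""
    ordered = sorted(points, key=sum)
    size = len(ordered) // k
    centers = []
    for i in range(k):
        # Last partition absorbs any remainder.
        chunk = ordered[i * size:(i + 1) * size] if i < k - 1 else ordered[i * size:]
        centers.append(tuple(sum(col) / len(chunk) for col in zip(*chunk)))
    return centers
```

Unlike random seeding, this initialization is deterministic, so repeated runs of K-means or Fuzzy-c-means start from the same, data-aware centers.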

A Novel RGB Image Steganography Using Simulated Annealing and LCG via LSB

  • Bawaneh, Mohammed J.;Al-Shalabi, Emad Fawzi;Al-Hazaimeh, Obaida M.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.143-151
    • /
    • 2021
  • The enormous prevalence of transferring official confidential digital documents via the Internet shows the urgent need to deliver confidential messages to the recipient without letting any unauthorized person learn the contents of the secret messages or detect their existence. Several steganography techniques, such as Least Significant Bit (LSB) substitution, Secure Cover Selection (SCS), the Discrete Cosine Transform (DCT), and Palette-Based (PB) methods, have been applied to prevent an intruder from analyzing and recovering the secret transferred message; a steganography method should withstand the analysis and detection challenges posed by steganalysis techniques. This paper presents a novel and robust framework for color image steganography that combines a Linear Congruential Generator (LCG), simulated annealing (SA), Caesar encryption, and the LSB substitution method in one system, in order to resist steganalysis and deliver data securely to its destination. SA, with the support of the LCG, finds an optimal minimum-sniffing path inside a cover color (RGB) image; the confidential message is then encrypted and embedded along that path in the RGB host image using the Caesar and LSB procedures. Embedding and extraction of the secret message require shared knowledge between sender and receiver: the SA initialization parameters, the LCG seed, the Caesar key agreement, and the secret message length. A steganalysis intruder cannot understand or detect the secret message inside the host image without correct knowledge of the manipulation process. The constructed system satisfies the main requirements of image steganography in terms of robustness against confidential message extraction, high-quality visual appearance, low mean square error (MSE), and high peak signal-to-noise ratio (PSNR).
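The embedding pipeline pairs a substitution cipher with LSB replacement. A simplified sketch with a sequential embedding path follows; the paper derives the path from an LCG seeded and optimized via SA, which is omitted here:

```python
def caesar(text, key):
    """Caesar shift over the full byte range (the pre-encryption step)."""
    return bytes((b + key) % 256 for b in text.encode())

def embed_lsb(pixels, payload):
    """Write each payload bit into the least significant bit of one pixel
    byte.  A sequential path stands in for the LCG/SA-derived path."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit
    return out

def extract_lsb(pixels, nbytes):
    """Read the LSBs back along the same path and repack them into bytes."""
    data = []
    for i in range(nbytes):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        data.append(byte)
    return bytes(data)
```

Because only the lowest bit of each visited byte changes, each channel value moves by at most 1, which is why LSB embedding keeps MSE low and PSNR high.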

A Study on the Verification of Integrity of Message Structure in Naval Combat Management System

  • Jung, Yong-Gyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.12
    • /
    • pp.209-217
    • /
    • 2022
  • A naval CMS (Combat Management System) is linked to various sensors and weapon equipment and uses DDS (Data Distribution Service) for efficient data communication between ICU (Interface Control Unit) nodes and IPNs (Information Processing Nodes). To use DDS, software in the system communicates in a PUB/SUB (Publish/Subscribe) pattern based on DDS topics. If the DDS message structures in this PUB/SUB exchange do not match, problems such as incorrect command processing and wrong information delivery occur in the sending and receiving application software. To improve this, this paper proposes a DDS message structure integrity verification method using a hash tree. To verify the applicability of the proposed method to a naval CMS, we measured its message integrity verification rate, compared the initialization time of a CMS with the verification method applied against that of the existing combat management system, and measured the hash tree generation time for the message structures to understand the effect on the operation and development process of the CMS. These tests confirmed that the proposed message structure verification method for system stability can be applied to a naval CMS.
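The hash-tree comparison of message structures might look like the following sketch, where each leaf is the hash of one field definition; publisher and subscriber can then compare a single root hash to verify that their topic definitions match. The field names in the usage example are hypothetical:

```python
import hashlib

def h(data):
    """SHA-256 digest used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(fields):
    """Hash tree over a message structure: leaves hash the individual
    field definitions, parents hash the concatenation of their children.
    Any mismatch in name, type, or order changes the root."""
    level = [h(f.encode()) for f in fields]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Comparing one root per topic is what keeps the verification overhead small at initialization: a full field-by-field comparison is only needed when roots disagree, and the tree shape then localizes which field differs.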