• Title/Summary/Keyword: MD algorithm

Search results: 190

Performance Evaluation of the WiMAX Network under a Complete Partitioned User Group with a Traffic Shaping Algorithm

  • Akhter, Jesmin;Islam, Md. Imdadul;Amin, M.R.
    • Journal of Information Processing Systems
    • /
    • v.10 no.4
    • /
    • pp.568-580
    • /
    • 2014
  • To enhance the utilization of the traffic channels of a network, a channel or a group of channels is allocated to a user group instead of allocating a radio channel to an individual user. The idea behind this is the statistical distribution of traffic arrival rates and service times for an individual user or a group of users. In this paper, we derive the blocking probability and throughput of a subscriber station of Worldwide Interoperability for Microwave Access (WiMAX) by considering both connection-level and packet-level traffic under a complete partition scheme. The main contribution of the paper is to incorporate a traffic shaping scheme into the incoming bursty traffic. Hence, we also analyze the impact of the buffer drain rate on the blocking probability and throughput.
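
The connection-level blocking analysis described above builds on standard teletraffic theory. As a point of reference only (not the authors' complete-partition model), the following sketch computes the classical Erlang-B blocking probability for a group of channels shared by a user group; the offered load and channel count are illustrative values, not figures from the paper.

```python
def erlang_b(offered_load, channels):
    """Classical Erlang-B blocking probability via the stable recursion
    B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Illustrative values: 8 channels shared by a user group offering 5 Erlangs.
print(f"Blocking probability: {erlang_b(5.0, 8):.4f}")
```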

A LT Codec Architecture with an Efficient Degree Generator and New Permutation Technique (효율적인 정도 생성기 및 새로운 순열 기법을 가진 LT 코덱 구조)

  • Hasan, Md. Tariq;Choi, Goang Seog
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.4
    • /
    • pp.117-125
    • /
    • 2014
  • In this paper, a novel hardware architecture of the LT codec is presented, in which a non-BP-based decoding algorithm is applied. The LT codec architecture is designed with an efficient degree distribution unit using Verilog HDL. To perform the permutation operation, counters with different initial values or time shifts are used to obtain well-spread permutations and an effect of randomness. The codec takes 128 bits as input and produces 256 encoded output bits. The simulation results show the expected performance, as the implemented degree distribution and the original distribution are nearly identical. The proposed LT codec takes 257.5 cycle counts and 2.575 μs for encoding and decoding, compared with the minimum of 5,204,861 cycle counts and 4.43 s of the design in previous works, where iterative soft BP decoding was used in ASIC and ASIP implementations of the LT codec.
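
The hardware design above replaces a software pseudo-random generator with time-shifted counters. As a software point of comparison only, the sketch below encodes a 128-bit block with a plain LT encoder, using an ideal soliton degree distribution and `random.sample` for neighbour selection; the degree distribution and block sizes are assumptions for illustration, not the distribution implemented in the paper.

```python
import random

def ideal_soliton(k):
    """Ideal soliton degree distribution: rho(1) = 1/k, rho(d) = 1/(d*(d-1))."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(source_bits, n_output, seed=0):
    """Produce n_output LT-encoded bits by XOR-ing randomly chosen source bits."""
    k = len(source_bits)
    rng = random.Random(seed)
    degrees = list(range(1, k + 1))
    weights = ideal_soliton(k)
    encoded = []
    for _ in range(n_output):
        d = rng.choices(degrees, weights=weights, k=1)[0]
        neighbours = rng.sample(range(k), d)   # permutation / neighbour selection
        bit = 0
        for i in neighbours:
            bit ^= source_bits[i]
        encoded.append(bit)
    return encoded

# 128 input bits -> 256 encoded bits, matching the codec sizes described above.
src = [random.getrandbits(1) for _ in range(128)]
out = lt_encode(src, 256)
```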

Design of Secure Information Center Using a Conventional Cryptography

  • Choi, Jun-Hyuk;Kim Tae-Gap;Go, Byung-Do;Ryou, Jae-Cheol
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.6 no.4
    • /
    • pp.53-66
    • /
    • 1996
  • The World Wide Web is a total solution for multimedia data transmission on the Internet. Because of its characteristics, such as ease of use, support for multimedia data, and a graphical user interface, the WWW has been extended to cover all kinds of applications. The Secure Information Center (SIC) is a data transmission system using conventional cryptography between client and server on the WWW. Its main function is to encrypt the data being sent. IDEA (International Data Encryption Algorithm) is used for data encryption, and the MD5 hash function is used for the authentication mechanism. Since the Secure Information Center is used by many users, a conventional cryptosystem is efficient for managing their secure interactions. However, there are some restrictions on sharing the same key and on data transmission between client and server, for example the risk of key exposure and the difficulty of key-sharing mechanisms. To solve these problems, the Secure Information Center provides encryption mechanisms and key management policies.
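
Since the search keyword here is the MD algorithm family, the MD5 step of such a system is easy to illustrate with Python's standard `hashlib` and `hmac` modules. The sketch below shows only the message-digest and keyed-authentication part; the IDEA encryption layer described in the paper is omitted because IDEA is not in the standard library, and the key and payload are hypothetical. (MD5 is considered broken for security purposes today; it appears here only because the 1996 paper uses it.)

```python
import hashlib
import hmac

# Hypothetical shared secret and message; the real system would also encrypt
# the payload with IDEA before transmission.
shared_key = b"example-shared-key"
payload = b"document requested from the Secure Information Center"

digest = hashlib.md5(payload).hexdigest()                     # plain MD5 digest
mac = hmac.new(shared_key, payload, hashlib.md5).hexdigest()  # keyed HMAC-MD5

print("MD5 :", digest)
print("HMAC:", mac)
```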

Color Space Based Objects Detection System from Video Sequences

  • Alom, Md. Zahangir;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.11a
    • /
    • pp.347-350
    • /
    • 2011
  • This paper proposes a statistical color model for background extraction based on the Hue-Saturation-Value (HSV) color space, instead of the traditional RGB space, and shows that it makes better use of the color information. The HSV color space corresponds closely to human color perception and has been shown to be more accurate for distinguishing shadows [3][4]. The key feature of this segmentation method is the processing of the hue component of color over the image area. Because the HSV color model is used, its color components can be analyzed and treated separately, so the proposed algorithm can adapt to different environmental illumination conditions and shadows. Polar and linear statistical operations are used to estimate the background from the video frames. The experimental results show that the proposed background subtraction method can segment video objects automatically, robustly, and accurately in various illumination and shadow environments.
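
The "polar and linear statistical operations" mentioned above reflect the fact that hue is an angle, so its background statistics must be computed circularly, while saturation and value can be averaged linearly. The numpy/OpenCV sketch below maintains such a background estimate under those assumptions; the frame source, averaging window, and threshold are illustrative, not the authors' exact model.

```python
import cv2
import numpy as np

def estimate_background(frames_bgr):
    """Estimate an HSV background model from a list of uint8 BGR frames
    (e.g. read with cv2.VideoCapture). Hue is averaged on the unit circle
    (polar statistics); saturation and value are averaged linearly."""
    hsv = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2HSV) for f in frames_bgr]).astype(np.float32)
    hue_rad = hsv[..., 0] * (np.pi / 90.0)   # OpenCV hue range 0..179 -> radians
    mean_angle = np.arctan2(np.sin(hue_rad).mean(axis=0), np.cos(hue_rad).mean(axis=0))
    bg_hue = np.mod(mean_angle, 2 * np.pi) * (90.0 / np.pi)
    bg_sat = hsv[..., 1].mean(axis=0)
    bg_val = hsv[..., 2].mean(axis=0)
    return np.stack([bg_hue, bg_sat, bg_val], axis=-1)

def foreground_mask(frame_bgr, background_hsv, hue_thresh=15.0):
    """Mark pixels whose hue deviates from the background hue as foreground."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    diff = np.abs(hsv[..., 0] - background_hsv[..., 0])
    diff = np.minimum(diff, 180.0 - diff)     # circular hue difference
    return (diff > hue_thresh).astype(np.uint8) * 255
```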

Accelerating Soft-Decision Reed-Muller Decoding Using a Graphics Processing Unit

  • Uddin, Md. Sharif;Kim, Cheol Hong;Kim, Jong-Myon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.4 no.2
    • /
    • pp.369-378
    • /
    • 2014
  • The Reed-Muller code is one of the efficient algorithms for multiple-bit error correction; however, the high computational requirement inherent in its decoding process prohibits its use in practical applications. To solve this problem, this paper proposes a graphics processing unit (GPU)-based parallel error control approach using Reed-Muller R(r, m) coding for real-time wireless communication systems. The GPU offers a high-throughput parallel computing platform that can achieve the desired high-performance decoding by exploiting the massive parallelism inherent in the algorithm. In addition, we compare the performance of the GPU-based approach with the equivalent sequential approach running on a traditional CPU. The experimental results indicate that the proposed GPU-based approach greatly outperforms the sequential approach in terms of execution time, yielding over a 70× speedup.
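
For the first-order code R(1, m), the soft-decision decoding kernel that a GPU implementation would parallelize is a fast Hadamard transform followed by a maximum search. The numpy sketch below shows that step on the CPU for R(1, 3); the choice of m and the noiseless test vector are assumptions for illustration, and the paper itself targets general R(r, m) on a GPU.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2^m vector."""
    a = np.asarray(a, dtype=np.float64).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y
            a[i + h:i + 2 * h] = x - y
        h *= 2
    return a

def decode_rm1(soft_bits):
    """Soft-decision ML decoding of RM(1, m).

    soft_bits: received values, +1 for bit 0 and -1 for bit 1 (plus noise).
    Returns (u0, [u1..um]) maximizing the correlation with a codeword."""
    t = fwht(soft_bits)
    k = int(np.argmax(np.abs(t)))
    m = int(np.log2(len(soft_bits)))
    u0 = 0 if t[k] > 0 else 1
    return u0, [(k >> j) & 1 for j in range(m)]

# Example: noiseless codeword for message u0=1, u=(0,1,1) over RM(1,3).
idx = np.arange(8)
bits = 1 ^ (((idx >> 1) & 1) ^ ((idx >> 2) & 1))  # c_i = u0 + u2*i1 + u3*i2 mod 2
received = 1.0 - 2.0 * bits                        # BPSK mapping
print(decode_rm1(received))                        # -> (1, [0, 1, 1])
```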

Accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent orthodontic treatment and two-jaw orthognathic surgery

  • Hong, Mihee;Kim, Inhwan;Cho, Jin-Hyoung;Kang, Kyung-Hwa;Kim, Minji;Kim, Su-Jung;Kim, Yoon-Ji;Sung, Sang-Jin;Kim, Young Ho;Lim, Sung-Hoon;Kim, Namkug;Baek, Seung-Hak
    • The korean journal of orthodontics
    • /
    • v.52 no.4
    • /
    • pp.287-297
    • /
    • 2022
  • Objective: To investigate the pattern of accuracy change in artificial intelligence-assisted landmark identification (LI) using a convolutional neural network (CNN) algorithm in serial lateral cephalograms (Lat-cephs) of Class III (C-III) patients who underwent two-jaw orthognathic surgery. Methods: A total of 3,188 Lat-cephs of C-III patients were allocated into the training and validation sets (3,004 Lat-cephs of 751 patients) and the test set (184 Lat-cephs of 46 patients; subdivided into the genioplasty and non-genioplasty groups, n = 23 per group) for LI. Each C-III patient in the test set had four Lat-cephs: initial (T0), pre-surgery (T1, presence of orthodontic brackets [OBs]), post-surgery (T2, presence of OBs and surgical plates and screws [S-PS]), and debonding (T3, presence of S-PS and fixed retainers [FR]). After the mean errors of 20 landmarks between the human gold standard and the CNN model were calculated, statistical analysis was performed. Results: The total mean error was 1.17 mm without significant difference among the four time-points (T0, 1.20 mm; T1, 1.14 mm; T2, 1.18 mm; T3, 1.15 mm). In the comparison of two time-points ([T0, T1] vs. [T2, T3]), ANS, A point, and B point showed an increase in error (p < 0.01, 0.05, 0.01, respectively), while Mx6D and Md6D showed a decrease in error (all p < 0.01). No difference in errors existed at B point, Pogonion, Menton, Md1C, and Md1R between the genioplasty and non-genioplasty groups. Conclusions: The CNN model can be used for LI in serial Lat-cephs despite the presence of OB, S-PS, FR, genioplasty, and bone remodeling.
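
A minimal illustration of the error metric used above, under the assumption that landmark coordinates have already been scaled to millimeters: the error per landmark is the Euclidean distance between the CNN prediction and the human gold standard, averaged over images. All coordinate values below are hypothetical.

```python
import numpy as np

# Hypothetical arrays of shape (n_images, n_landmarks, 2), coordinates in mm.
gold = np.array([[[10.0, 20.0], [35.2, 48.1]],
                 [[11.5, 19.0], [34.8, 47.5]]])
pred = np.array([[[10.4, 20.3], [36.0, 47.2]],
                 [[11.0, 19.6], [35.1, 47.9]]])

# Euclidean distance per landmark per image, then averaged.
errors = np.linalg.norm(pred - gold, axis=-1)
print("mean error per landmark (mm):", errors.mean(axis=0))
print("total mean error (mm):", errors.mean())
```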

An efficient transcoding algorithm for AMR and G.723.1 speech coders and performance evaluation (AMR과 G.723.1 음성부호화기를 위한 효율적인 상호부호화 알고리듬 및 성능평가)

  • 최진규;윤성완;강홍구;윤대희
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.121-130
    • /
    • 2004
  • In applications requiring interoperability between different networks, such as VoIP and wireless communication systems, two speech codecs must work together in a cascaded connection, or tandem. Tandeming has several problems, such as long delay, high complexity, and quality degradation, because the complete encoding/decoding process is performed twice. Transcoding is one of the best solutions to these problems. The transcoding algorithm varies with the structure of the source and target coders. In this paper, a transcoding algorithm including LSP conversion, pitch estimation, and a new perceptual weighting filter for reducing complexity and improving quality is proposed. These algorithms are applied to the AMR and G.723.1 pair. By employing the proposed algorithms in the transcoder, the complexity is reduced by about 20%-58% and the quality is improved compared to tandem.
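
The perceptual weighting filter in CELP-family coders such as AMR and G.723.1 commonly has the form W(z) = A(z/γ1)/A(z/γ2), where A(z) is the LPC analysis filter. The authors propose a new variant, which is not reproduced here; the scipy sketch below applies only the standard form, with an illustrative low-order LPC polynomial and γ values.

```python
import numpy as np
from scipy.signal import lfilter

def perceptual_weighting(signal, lpc, gamma1=0.9, gamma2=0.6):
    """Apply the standard CELP perceptual weighting filter
    W(z) = A(z/gamma1) / A(z/gamma2), where lpc = [1, a1, ..., ap]."""
    lpc = np.asarray(lpc, dtype=float)
    num = lpc * gamma1 ** np.arange(len(lpc))   # A(z/gamma1)
    den = lpc * gamma2 ** np.arange(len(lpc))   # A(z/gamma2)
    return lfilter(num, den, signal)

# Illustrative stable 2nd-order LPC polynomial and a short noise excitation.
rng = np.random.default_rng(0)
a = np.array([1.0, -0.9, 0.64])
weighted = perceptual_weighting(rng.standard_normal(160), a)
```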

Adaptive Equalization Algorithm of Enhanced CMA using Minimum Disturbance Technique (최소 Disturbance 기법을 적용한 향상된 CMA 적응 등화 알고리즘)

  • Kang, Dae-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.6
    • /
    • pp.55-61
    • /
    • 2014
  • This paper is concerned with the performance of the ECMA (Enhanced CMA) algorithm, which can simultaneously compensate amplitude and phase by applying the minimum disturbance technique to the CMA adaptive equalizer. The ECMA improves the gradient noise amplification problem, stability, and robustness through the minimum disturbance technique, that is, minimizing the equalizer tap-weight variation in the sense of the squared Euclidean norm, together with the decision-directed mode; a new cost function is then proposed in order to simultaneously compensate the amplitude and phase of the received signal with a minimal increase in computational operations. The performance of the ECMA algorithm was compared to the existing MCMA by computer simulation. To assess performance, the recovered signal constellation (the equalizer output signal), the residual ISI, the maximum distortion characteristic, the MSE learning curve representing the convergence performance of the equalizer, and the overall frequency transfer function of the channel and equalizer were used. The computer simulation results show that the ECMA has better compensation capability for amplitude and phase in the recovered constellation, and that the convergence time of the adaptive equalization is improved compared to the MCMA.
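
For reference, the baseline CMA tap update that ECMA refines (not the authors' minimum-disturbance update) drives the equalizer output toward the constant modulus R2 = E|s|^4 / E|s|^2 of the transmitted constellation. The numpy sketch below implements that baseline; the tap count, step size, and channel are illustrative assumptions.

```python
import numpy as np

def cma_equalize(received, n_taps=11, mu=1e-3, r2=1.0):
    """Baseline constant modulus algorithm (CMA) adaptive equalizer.

    received: complex baseband samples after the channel.
    r2: constant modulus of the constellation, E|s|^4 / E|s|^2."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    out = np.empty(len(received) - n_taps, dtype=complex)
    for n in range(len(out)):
        x = received[n:n + n_taps][::-1]      # regressor, most recent sample first
        y = np.vdot(w, x)                     # equalizer output w^H x
        e = y * (np.abs(y) ** 2 - r2)         # CMA error term
        w = w - mu * e.conjugate() * x        # stochastic gradient update
        out[n] = y
    return out, w

# Example: unit-energy QPSK through a mild two-tap channel (illustrative).
rng = np.random.default_rng(0)
sym = (rng.integers(0, 2, 4000) * 2 - 1 + 1j * (rng.integers(0, 2, 4000) * 2 - 1)) / np.sqrt(2)
rx = np.convolve(sym, [1.0, 0.25 + 0.1j])
y, w = cma_equalize(rx, r2=1.0)
```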

A Comparison Analysis of Various Approaches to Multidimensional Scaling in Mapping a Knowledge Domain's Intellectual Structure (지적 구조 분석을 위한 MDS 지도 작성 방식의 비교 분석)

  • Lee, Jae-Yun
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.41 no.2
    • /
    • pp.335-357
    • /
    • 2007
  • There have been many studies representing intellectual structures with multidimensional scaling (MDS). However, an MDS configuration is limited in representing local details and explicit structures. In this paper, we identify two components of the MDS mapping approach: one is the MDS algorithm and the other is the preparation of the data matrix. Various combinations of these two components are compared through several measures of fit. It is revealed that the conventional approach, composed of the ALSCAL algorithm and a Euclidean distance matrix calculated from Pearson's correlation matrix, is the worst of the compared MDS mapping approaches. In contrast, the best approach is composed of the PROXSCAL algorithm and a z-scored Euclidean distance matrix calculated from Pearson's correlation matrix. These results suggest that a more detailed and explicit map of a knowledge domain can be obtained through careful consideration of the MDS mapping process.
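
The best-performing combination identified above can be approximated with readily available tools: scikit-learn's MDS uses SMACOF majorization (the same family of algorithms as PROXSCAL, though not identical), applied here to Euclidean distances between z-scored Pearson correlation profiles. The co-occurrence matrix below is hypothetical, and this is only one reading of the paper's preprocessing.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import zscore
from sklearn.manifold import MDS

# Hypothetical item-by-feature co-occurrence matrix (rows = items to map).
rng = np.random.default_rng(0)
cooccurrence = rng.poisson(3.0, size=(12, 30)).astype(float)

# 1. Pearson correlation matrix between items.
corr = np.corrcoef(cooccurrence)

# 2. Euclidean distances between z-scored correlation profiles.
profiles = zscore(corr, axis=1)
dist = squareform(pdist(profiles, metric="euclidean"))

# 3. SMACOF-based MDS (a PROXSCAL-like majorization algorithm) on the
#    precomputed dissimilarity matrix.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
print(coords.shape)   # (12, 2) map coordinates
```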

High-Reliable Classification of Multiple Induction Motor Faults using Robust Vibration Signatures in Noisy Environments based on a LPC Analysis and an EM Algorithm (LPC 분석 기법 및 EM 알고리즘 기반 잡음 환경에 강인한 진동 특징을 이용한 고 신뢰성 유도 전동기 다중 결함 분류)

  • Kang, Myeongsu;Jang, Won-Chul;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.2
    • /
    • pp.21-30
    • /
    • 2014
  • The use of induction motors has recently been increasing at a variety of industrial sites, where they play a significant role. This has motivated many researchers to study fault detection and classification systems for induction motors in order to reduce the economic damage caused by their faults. To identify induction motor faults early, this paper estimates the spectral envelope of each induction motor fault by utilizing a linear prediction coding (LPC) analysis technique and an expectation maximization (EM) algorithm. Moreover, this paper classifies induction motor faults into their corresponding categories by calculating the Mahalanobis distance using the estimated spectral envelopes and finding the minimum distance. Experimental results show that the proposed approach yields higher classification accuracies than the state-of-the-art conventional approach in both noiseless and noisy environments.
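
A minimal version of the feature-and-classification pipeline described above: autocorrelation-method LPC for the spectral envelope, then minimum Mahalanobis distance to per-class statistics. The EM-based envelope refinement used in the paper is omitted, and the vibration frame and class statistics below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(signal, order=8):
    """Autocorrelation-method LPC: solve the Toeplitz normal equations R a = r."""
    x = np.asarray(signal, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # symmetric Toeplitz system
    return np.concatenate(([1.0], -a))            # A(z) = 1 - sum a_k z^-k

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify(feature, class_stats):
    """Assign the class whose (mean, inverse covariance) gives the smallest distance."""
    return min(class_stats, key=lambda c: mahalanobis(feature, *class_stats[c]))

# Hypothetical vibration frame and two fault classes with precomputed statistics.
rng = np.random.default_rng(1)
frame = rng.standard_normal(1024)
feat = lpc_coefficients(frame)[1:]                # drop the leading 1.0
stats = {
    "bearing_fault": (rng.standard_normal(8) * 0.1, np.eye(8)),
    "rotor_unbalance": (rng.standard_normal(8) * 0.1, np.eye(8)),
}
print(classify(feat, stats))
```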