• Title/Summary/Keyword: Algorithm decomposition


An Efficient Array Algorithm for VLSI Implementation of Vector-radix 2-D Fast Discrete Cosine Transform (Vector-radix 2차원 고속 DCT의 VLSI 구현을 위한 효율적인 어레이 알고리듬)

  • 신경욱;전흥우;강용섬
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.12
    • /
    • pp.1970-1982
    • /
    • 1993
  • This paper describes an efficient array algorithm for parallel computation of the vector-radix two-dimensional (2-D) fast discrete cosine transform (VR-FCT), and its VLSI implementation. By mapping the 2-D VR-FCT onto a 2-D array of processing elements (PEs), the butterfly structure of the VR-FCT can be implemented efficiently, with high concurrency and a local communication geometry. The proposed array algorithm features architectural modularity, regularity, and locality, making it well suited to VLSI realization. In addition, no transposition memory is required, which is unavoidable in the conventional row-column decomposition approach. The algorithm has a time complexity of O(N + Nnzd·log2N) for an (N×N) 2-D DCT, where Nnzd is the number of non-zero digits in canonic signed digit (CSD) code. By adopting CSD arithmetic in the circuit design, the number of additions is reduced by about 30% compared to 2's complement arithmetic. A computational accuracy analysis for finite-wordlength processing is presented. From simulation results, it is estimated that an (8×8) 2-D DCT (with Nnzd = 4) can be computed in about 0.88 μs at a 50 MHz clock frequency, giving a throughput of about 72 megapixels per second.

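For context on the CSD arithmetic mentioned in the abstract above: a canonic-signed-digit recoding represents a value with digits in {-1, 0, 1} and no two adjacent non-zero digits, which minimizes the number of add/subtract terms in a constant multiplier. A minimal sketch of the recoding (not the paper's hardware design):

```python
def to_csd(n):
    """Convert a non-negative integer to canonic signed digit (CSD) form.

    Returns digits in {-1, 0, 1}, least significant first, with no two
    adjacent non-zero digits -- the property that reduces adder count.
    """
    digits = []
    while n:
        if n % 2 == 0:
            digits.append(0)
        else:
            d = 2 - (n % 4)          # n % 4 == 1 -> +1, n % 4 == 3 -> -1
            digits.append(d)
            n -= d                   # remaining value is now even
        n //= 2
    return digits

def csd_value(digits):
    """Reconstruct the integer from its CSD digits."""
    return sum(d << i for i, d in enumerate(digits))
```

For example, 7 = 2^3 - 1 needs two operations in CSD form instead of the three additions implied by its binary form 111.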

Joint Demosaicking and Arbitrary-ratio Down Sampling Algorithm for Color Filter Array Image (컬러 필터 어레이 영상에 대한 공동의 컬러보간과 임의 배율 다운샘플링 알고리즘)

  • Lee, Min Seok;Kang, Moon Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.4
    • /
    • pp.68-74
    • /
    • 2017
  • This paper presents a joint demosaicking and arbitrary-ratio down-sampling algorithm for color filter array (CFA) images. Color demosaicking is a necessary part of the image signal processing pipeline in many digital imaging systems that use a single sensor. In devices such as smartphones, the high-resolution image obtained from the image sensor must also be down-sampled to be displayed on the screen. The conventional solution is "demosaicking first, down sampling later". However, this scheme requires a significant amount of memory and computation, and artifacts can be introduced or details damaged during the demosaicking and down-sampling processes. In this paper, we propose a method in which demosaicking and down sampling are performed simultaneously. We use inverse mapping of the Bayer CFA and then perform joint demosaicking and down sampling at an arbitrary ratio, based on decomposing the input data into high- and low-frequency components. Experimental results show that the proposed algorithm achieves better image quality at much lower computational cost than the conventional solution.
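For context, the Bayer CFA sampling and the low/high-frequency decomposition that the proposed scheme builds on can be sketched as follows. The RGGB layout and the box filter are illustrative assumptions, not the paper's exact pattern or filters:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image into a single-channel RGGB Bayer CFA plane."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return cfa

def low_high_split(cfa, k=2):
    """Split the CFA plane into low- and high-frequency parts.

    The low band is a (2k+1)x(2k+1) box-filter average (reflect padding);
    the high band is the residual, so low + high reconstructs the input.
    """
    pad = np.pad(cfa.astype(float), k, mode='reflect')
    low = np.empty(cfa.shape, dtype=float)
    for i in range(cfa.shape[0]):
        for j in range(cfa.shape[1]):
            low[i, j] = pad[i:i + 2*k + 1, j:j + 2*k + 1].mean()
    return low, cfa - low
```

Processing the two bands separately is what lets a joint scheme resample without fully reconstructing three color planes first.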

Investigation of Leksell GammaPlan's ability for target localizations in Gamma Knife Subthalamotomy (감마나이프 시상하핵파괴술에서 목표물 위치측정을 위한 렉셀 감마플랜 능력의 조사)

  • Hur, Beong Ik
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.7
    • /
    • pp.901-907
    • /
    • 2019
  • The aim of this study is to evaluate the target-localization ability of Leksell GammaPlan (LGP) in Gamma Knife subthalamotomy (or pallidotomy or thalamotomy) for functional diseases. To evaluate the accuracy of LGP's location settings, the difference Δr between the target coordinates calculated by LGP (or LSP) and by the author's algorithm was reviewed for 10 patients who underwent Deep Brain Stimulation (DBS) surgery. Δr ranged from 0.0244663 mm to 0.107961 mm, with an average of 0.054398 mm. The transformation matrix between stereotactic space and brain-atlas space was calculated using the PseudoInverse or SingularValueDecomposition functions of Mathematica to determine the positional relationship between the two coordinate systems. Despite precise frame positioning, misalignments appeared: yaw from -3.44739 to 1.82243 degrees, pitch from -4.57212 to 0.692063 degrees, and roll from -6.38239 to 7.21426 degrees. In conclusion, a simple in-house algorithm was used to test the accuracy of the location settings of LGP (or LSP) on the Gamma Knife platform and the feasibility of Gamma Knife subthalamotomy. Functional diseases can thus be treated with Gamma Knife radiosurgery safely and effectively. In the future, the proposed algorithm for target-localization QA should be a significant contributor to the treatment of movement disorders at Gamma Knife centers.
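The coordinate-system fit described above (a transformation between stereotactic and atlas space computed via a pseudoinverse or SVD) can be sketched with NumPy, whose `lstsq` solves the same least-squares problem via SVD. The coordinates and transform below are made up for illustration; the clinical data are not reproduced:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ~= src @ A.T + t via the pseudoinverse."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])          # homogeneous [x y z 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # SVD-based solve
    return params[:3].T, params[3]                 # rotation/scale A, offset t

# hypothetical paired coordinates for illustration (not patient data)
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (10, 3))
A_true = np.array([[0.999, -0.02,  0.01],
                   [0.02,   0.999, -0.03],
                   [-0.01,  0.03,  0.999]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ A_true.T + t_true

A, t = fit_affine(src, dst)
dr = np.linalg.norm(src @ A.T + t - dst, axis=1)   # per-target residual Δr
```

With noisy clinical coordinates, `dr` would play the role of the per-patient Δr reported in the study.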

Research on Computational Thinking Based on Analysis of Learners in Each Major (계열별 학습자 분석 기반의 컴퓨팅사고력 연구)

  • Kwon, Jungin
    • The Journal of Society for e-Business Studies
    • /
    • v.24 no.4
    • /
    • pp.17-30
    • /
    • 2019
  • The rapid development of a software-centered society emphasizes the importance of software competence as a basic requirement for all academic disciplines. The purpose of this study is to investigate differences in perception among students taking the basic software education currently conducted in universities. The results of applying the nine core elements of computational thinking for problem solving to learners in each major track are as follows. Humanities learners mainly applied the elements of Data Collection, Problem Decomposition, and Automation; natural-science learners mainly applied Data Analysis, Algorithm, and Automation; and arts learners mainly applied Data Representation, Abstraction, and Automation. When applying computational thinking to software development, humanities learners mainly applied Data Collection, Algorithm, and Automation, while natural-science learners again mainly applied Data Analysis, Algorithm, and Automation, and arts learners again mainly applied Data Representation, Abstraction, and Automation. Based on these results, it is expected that educational effectiveness will be maximized by incorporating learner analysis by major into the design of the basic software curricula that universities conduct.

Top-down Hierarchical Clustering using Multidimensional Indexes (다차원 색인을 이용한 하향식 계층 클러스터링)

  • Hwang, Jae-Jun;Mun, Yang-Se;Hwang, Gyu-Yeong
    • Journal of KIISE:Databases
    • /
    • v.29 no.5
    • /
    • pp.367-380
    • /
    • 2002
  • Due to the recent increase in applications requiring huge amounts of data, such as spatial data analysis and image analysis, clustering on large databases has been actively studied. In a hierarchical clustering method, a tree representing a hierarchical decomposition of the database is first created and then used for efficient clustering. Existing hierarchical clustering methods have mainly adopted the bottom-up approach, which builds the tree from the bottom to the topmost level of the hierarchy. These bottom-up methods require at least one scan over the entire database to build the tree and need to search most nodes of the tree, since the clustering algorithm starts at the leaf level. In this paper, we propose a novel top-down hierarchical clustering method that uses the multidimensional indexes already maintained in most database applications. Generally, multidimensional indexes have the clustering property of storing similar objects in the same (or adjacent) data pages. Using this property, we can find adjacent objects without calculating the distances between them. We first formally define the cluster based on the density of objects. For this definition, we propose the concept of the region contrast partition, based on the density of the region. To speed up the clustering algorithm, we use a branch-and-bound algorithm; we propose the bounds and formally prove their correctness. Experimental results show that the proposed method is at least as effective in clustering quality as BIRCH, a bottom-up hierarchical clustering method, while reducing the number of page accesses by 26 to 187 times depending on the size of the database. We therefore believe that the proposed method significantly improves clustering performance on large databases and is practically usable in various database applications.
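The key property the method exploits (an index groups similar objects into the same or adjacent data pages, so neighbors can be found without pairwise distance computations) can be illustrated with a toy grid "index". This is a didactic stand-in, not the paper's multidimensional index or its branch-and-bound procedure:

```python
from collections import defaultdict

def build_grid(points, cell):
    """Bucket 2-D points into grid cells, mimicking an index's data pages."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def neighbors(grid, p, cell):
    """Objects in the same or adjacent cells -- no distances computed."""
    cx, cy = int(p[0] // cell), int(p[1] // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(grid.get((cx + dx, cy + dy), []))
    return out
```

A top-down clustering pass can then descend from coarse cells to fine ones, pruning whole cells whose density bound rules them out, instead of scanning every object.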

A binary adaptive arithmetic coding algorithm based on adaptive symbol changes for lossless medical image compression (무손실 의료 영상 압축을 위한 적응적 심볼 교환에 기반을 둔 이진 적응 산술 부호화 방법)

  • 지창우;박성한
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.12
    • /
    • pp.2714-2726
    • /
    • 1997
  • In this paper, a medical image compression method based on adaptive symbol changes is presented. First, the differential image domain is obtained by applying differentiation rules or adaptive predictors to the original medical image. The algorithm then determines the context associated with the differential image from that domain. Prediction symbols, considered the most probable differential image values, are maintained at a high probability through an adaptive symbol change procedure based on estimates of the polarity coincidence between the differential image values to be coded under the context and the differential image values in the model template. At the coding step, the differential image values are encoded as "predicted" or "non-predicted" by a binary adaptive arithmetic encoder employing a binary decision tree. Simulation results indicate that the prediction hit ratios of differential image values under the proposed algorithm improve the coding gain by 25% and 23% over an arithmetic coder with the ISO JPEG lossless predictor and an arithmetic coder with differentiation rules or adaptive predictors, respectively. The proposed method can be used in the compression part of a medical PACS, because the encoder can be applied directly to the full bit-plane medical image without decomposing it into a series of binary bit-planes, and because the encoder has lower complexity, using only additions when recursively subdividing unit intervals.

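As background for the abstract above: a binary adaptive arithmetic coder narrows an interval according to a probability model that encoder and decoder update in lockstep. The sketch below uses exact rationals for clarity and a simple add-one counter as the model; the paper's context modeling, symbol changes, and hardware-friendly interval subdivision are not reproduced:

```python
from fractions import Fraction

def encode(bits):
    """Adaptive binary arithmetic encoding with exact rational intervals."""
    low, width = Fraction(0), Fraction(1)
    counts = [1, 1]                      # add-one counts for symbols 0 and 1
    for b in bits:
        p0 = Fraction(counts[0], counts[0] + counts[1])
        if b == 0:
            width *= p0                  # keep the lower sub-interval
        else:
            low += width * p0            # move to the upper sub-interval
            width *= 1 - p0
        counts[b] += 1                   # model update mirrored by decoder
    return low                           # any point in [low, low + width) works

def decode(point, n):
    """Recover n bits from a point inside the final interval."""
    low, width = Fraction(0), Fraction(1)
    counts = [1, 1]
    out = []
    for _ in range(n):
        p0 = Fraction(counts[0], counts[0] + counts[1])
        split = low + width * p0
        if point < split:
            b = 0
            width *= p0
        else:
            b = 1
            low = split
            width *= 1 - p0
        counts[b] += 1
        out.append(b)
    return out
```

A production coder replaces the exact rationals with fixed-width integer intervals and renormalization, which is where the additions-only subdivision mentioned in the abstract comes in.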

A Fast Algorithm for Computing Multiplicative Inverses in GF(2^m) using Factorization Formula and Normal Basis (인수분해 공식과 정규기저를 이용한 GF(2^m) 상의 고속 곱셈 역원 연산 알고리즘)

  • 장용희;권용진
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.324-329
    • /
    • 2003
  • Public-key cryptosystems such as Diffie-Hellman key distribution and elliptic curve cryptosystems are built on the operations defined in GF(2^m): addition, subtraction, multiplication, and multiplicative inversion. These operations must be computed at high speed to implement such cryptosystems efficiently. Among them, multiplicative inversion is the most time-consuming and has therefore been the object of much investigation. Fermat's theorem says β^(-1) = β^(2^m - 2), where β^(-1) is the multiplicative inverse of β ∈ GF(2^m). Therefore, to compute the multiplicative inverse of an arbitrary element of GF(2^m), it is most important to reduce the number of multiplications by decomposing 2^m - 2 efficiently. Among the many algorithms on this subject, the algorithm proposed by Itoh and Tsujii [2] reduced the required number of multiplications to O(log m) by using a normal basis. A few papers have since presented algorithms improving on Itoh and Tsujii's, but they have demerits such as complicated decomposition processes [3,5]. In this paper, for the exponent 2^m - 2, which is the case mainly used in practical applications, an efficient algorithm is proposed for computing the multiplicative inverse at high speed by using both the factorization formula x^3 - y^3 = (x - y)(x^2 + xy + y^2) and a normal basis. The number of multiplications required by the algorithm is smaller than that of Itoh and Tsujii's, and the algorithm decomposes 2^m - 2 more simply than the other proposed algorithms.
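Fermat's approach can be sketched directly: raising β to 2^m - 2 by plain square-and-multiply costs roughly 2(m - 1) field multiplications, which is exactly the cost that decompositions like Itoh and Tsujii's (and the paper's factorization-based one) reduce, especially in a normal basis where squaring is a free cyclic shift. A minimal polynomial-basis sketch in GF(2^8) with the AES polynomial (an illustrative choice; the paper does not fix a field):

```python
def gf_mul(a, b, poly=0x11B, m=8):
    """Multiply in GF(2^m) (default: the AES field, x^8 + x^4 + x^3 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly                    # reduce modulo the field polynomial
    return r

def gf_inv(beta, poly=0x11B, m=8):
    """beta^(2^m - 2) by square-and-multiply: Fermat gives the inverse."""
    result, base, e = 1, beta, (1 << m) - 2
    while e:
        if e & 1:
            result = gf_mul(result, base, poly, m)
        base = gf_mul(base, base, poly, m)
        e >>= 1
    return result
```

Since 2^m - 2 has m - 1 one bits, this naive ladder performs about 2(m - 1) multiplications; a good decomposition of the exponent brings the count of general multiplications down to O(log m).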

Basic Research for the Recognition Algorithm of Tongue Coatings for Implementing a Digital Automatic Diagnosis System (디지털 자동 설진 시스템 구축을 위한 설태 인식 알고리즘 기초 연구)

  • Kim, Keun-Ho;Ryu, Hyun-Hee;Kim, Jong-Yeol
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.23 no.1
    • /
    • pp.97-103
    • /
    • 2009
  • The status and properties of the tongue are important indicators of one's health, reflecting physiological and clinicopathological changes of the inner organs. However, tongue diagnosis is affected by examination circumstances such as the light source, the patient's posture, and the doctor's condition. To develop an automatic tongue diagnosis system for objective and standardized diagnosis, classifying the tongue coating is essential but difficult, since features such as the color and texture of the tongue coatings and substance differ little, especially between neighboring regions on the tongue surface. The proposed method has two procedures. The first acquires a color table for classifying tongue coatings and substance by automatically separating coating regions marked by Oriental medical doctors, decomposing the colors of each region into hue, saturation, and brightness, and obtaining a second-order discriminant from the statistics of hue and saturation for each kind of tongue coating. The second applies the tongue region of an input image to the color table, separating the regions of tongue coatings and classifying them automatically. As a result, the kinds of tongue coating and substance were segmented from a facial image in correspondence with the regions marked by Oriental medical doctors, and the color table took hue and saturation values as inputs and classified them into white coating, yellow coating, and substance in a digital tongue diagnosis system. The coating regions classified by the proposed method were almost identical to the marked regions; the classification accuracy, measured as the degree of correspondence between what the Oriental medical doctors diagnosed and what the proposed method classified, was 83%. Since the classified regions provide effective information, the proposed method can be used for objective and standardized diagnosis and applied to a ubiquitous healthcare system. Therefore, the method should find wide use in Oriental medicine.
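The hue-saturation discriminant described above can be approximated with a per-class Gaussian model, whose log-density is a second-order (quadratic) function of the features. This is only an illustrative sketch: the class names and training data below are hypothetical, not the authors' trained color table.

```python
import colorsys
import numpy as np

def hs_features(rgb_pixels):
    """Map 8-bit RGB pixels to (hue, saturation) pairs."""
    return np.array([colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[:2]
                     for r, g, b in rgb_pixels])

class QuadraticDiscriminant:
    """Per-class Gaussian fit; the log-density is a 2nd-order discriminant."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.params = {}
        for c in self.classes:
            Xc = X[np.array(y) == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])  # regularized
            self.params[c] = (mu, np.linalg.inv(cov),
                              np.log(np.linalg.det(cov)))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            mu, icov, logdet = self.params[c]
            d = X - mu
            # quadratic discriminant: -(d' S^-1 d + log|S|) / 2 per sample
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, icov, d)
                                  + logdet))
        return [self.classes[i] for i in np.argmax(scores, axis=0)]
```

Fitting one Gaussian per coating class and picking the highest-scoring class plays the role of the abstract's hue/saturation color table.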

Developing Lessons and Rubrics to Promote Computational Thinking (Computational Thinking역량 계발을 위한 수업 설계 및 평가 루브릭 개발)

  • Choi, Hyungshin
    • Journal of The Korean Association of Information Education
    • /
    • v.18 no.1
    • /
    • pp.57-64
    • /
    • 2014
  • This study aims to suggest lesson plans and evaluation methods for primary pre-service teachers by reviewing the concept of computational thinking (CT) and its sub-components. To pursue this goal, a literature review was conducted on CT and on the effectiveness of programming courses. In addition, the educational programming functions of Scratch were analyzed, yielding six CT elements (data representation, problem decomposition, abstraction, algorithms & procedures, parallelization, simulation). With these six elements, lesson plans for a 15-week semester were designed that represent the connections with the six CT elements. Based on the PECT (Progression of Early Computational Thinking) model and the CT framework, a rubric was also developed to evaluate the proficiency levels (basic, developing, proficient) that learners reveal in their final projects. Following an empirical follow-up study, the lesson plans and rubric suggested in the current study are expected to be utilized in teachers' colleges.

A Study on Path Analysis Between Elementary School Students' Computational Thinking Components (초등학생의 컴퓨팅 사고력 구성요소 간의 경로 분석 연구)

  • Lee, Jaeho;Jang, Junhyung
    • Journal of The Korean Association of Information Education
    • /
    • v.24 no.2
    • /
    • pp.139-146
    • /
    • 2020
  • There is heated debate about which core competencies future generations, who must live with an uncertain future, should cultivate. Future society is expected to become a software-oriented society driven by software. Under these circumstances, interest in software education is exploding around the world, and interest in cultivating computational thinking through software education is increasing with it. Discussions about what computational thinking is and which competence factors it comprises are also in progress. However, research on the relationships between the competence factors of computational thinking is relatively scarce. To address this gap, this study proceeded as follows. First, five competence factors of computational thinking were selected. Second, we defined a path model to analyze the relationships among these competence factors. Third, we chose a test tool to assess computational thinking. Fourth, computational thinking tests were administered to 801 students in grades 3 through 6 of elementary school. Fifth, implications were derived by analyzing the test results from various viewpoints.