• Title/Summary/Keyword: Vector instruction

Search Results: 34

Design of a RISC Processor with an Efficient Processing Unit for Multimedia Data (효율적인 멀티미디어데이터 처리를 위한 RISC Processor의 설계)

  • 조태헌;남기훈;김명환;이광엽
    • Proceedings of the IEEK Conference / 2003.07b / pp.867-870 / 2003
  • This paper targets the design of an efficient RISC processor unit for multimedia data processing. Building on the SIMD (Single Instruction Multiple Data) concept of vector processors, it improves the performance of the multiply-accumulate (MAC) operation, the core operation of multimedia processing, by partially parallelizing operations on data whose bit width is small relative to the fixed bit width of the arithmetic unit. In existing general-purpose extensions such as MMX and VIS, the reordering or packing/unpacking step that repacks data of different lengths, or small-bit-width multimedia data, into a single word (a preprocessing step required to provide the data contiguity needed for partial parallelization) degrades overall performance. This paper therefore reuses the existing processor's arithmetic unit structure to implement an arithmetic unit for parallel multiplication and proposes a data alignment operation structure to support it.

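The packed multiply-accumulate idea that this abstract builds on, and that MMX/VIS-style extensions expose directly, can be illustrated in software. The sketch below is not the paper's hardware design; it is a minimal C example, assuming an x86 CPU with SSE2, that uses the pmaddwd instruction (_mm_madd_epi16) to perform eight 16-bit multiplies and pairwise accumulation in a single operation.

    /* Illustrative only: packed multiply-accumulate (MAC) on sub-word data,
     * in the spirit of the MMX/VIS-style SIMD the abstract refers to.
     * Requires an x86 CPU with SSE2. */
    #include <emmintrin.h>
    #include <stdint.h>

    /* Dot product of two int16 arrays; n must be a multiple of 8. */
    int32_t dot_i16(const int16_t *a, const int16_t *b, int n)
    {
        __m128i acc = _mm_setzero_si128();
        for (int i = 0; i < n; i += 8) {
            __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
            __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
            /* pmaddwd: eight 16x16-bit products, pairwise-added into four
             * 32-bit partial sums -- a packed MAC in one instruction. */
            acc = _mm_add_epi32(acc, _mm_madd_epi16(va, vb));
        }
        /* Horizontal sum of the four 32-bit lanes. */
        acc = _mm_add_epi32(acc, _mm_shuffle_epi32(acc, _MM_SHUFFLE(1, 0, 3, 2)));
        acc = _mm_add_epi32(acc, _mm_shuffle_epi32(acc, _MM_SHUFFLE(2, 3, 0, 1)));
        return _mm_cvtsi128_si32(acc);
    }

The packing cost the abstract identifies shows up here as the requirement that the 16-bit operands already sit contiguously in memory before the packed MAC can be issued.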

Implementation of Fast HEVC Inverse Transform using AVX2 Instruction Set (AVX2 명령어 집합을 이용한 고속 HEVC 역-변환 구현)

  • Mok, Jung-Soo;Ma, Jonghyun;Ahn, Yong-Jo;Sim, Donggyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.552-554 / 2015
  • This paper proposes a method for accelerating the inverse transform module of an HEVC (High Efficiency Video Coding) decoder using the AVX2 (Advanced Vector eXtensions 2) instruction set. AVX2 instructions use 256-bit registers to operate on multiple data elements in parallel with a single instruction and are effective for repetitive arithmetic or logical operations. The proposed method speeds up the inverse transform, performed on TUs (Transform Units) from 8×8 to 32×32 in size, by computing it as a matrix multiplication with AVX2 instructions. Experimental results show that the AVX2-based inverse transform is on average 51% faster than the Chen algorithm and on average 20% faster than an implementation using the SSE (Streaming SIMD Extensions) instruction set.

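As a rough illustration of computing a transform stage "as a matrix multiplication" with 256-bit registers, the following C sketch multiplies an 8x8 block of 32-bit integers by an 8x8 transform matrix, producing each output row across the eight 32-bit lanes of one AVX2 register. It is a minimal sketch, not the authors' implementation; the real decoder would also handle 16x16 and 32x32 TUs, intermediate clipping, and the fast (Chen) algorithm baseline mentioned in the abstract.

    /* Minimal sketch: one 8x8 transform stage as a matrix multiply, each
     * output row computed across the eight 32-bit lanes of a 256-bit
     * register.  Build with -mavx2. */
    #include <immintrin.h>
    #include <stdint.h>

    /* dst = src * M, all 8x8 matrices of 32-bit integers, row-major. */
    void mat8x8_mul_avx2(int32_t dst[8][8],
                         const int32_t src[8][8],
                         const int32_t M[8][8])
    {
        for (int i = 0; i < 8; ++i) {
            __m256i acc = _mm256_setzero_si256();
            for (int k = 0; k < 8; ++k) {
                /* Broadcast src[i][k] and multiply it against row k of M:
                 * eight multiply-adds per iteration instead of one. */
                __m256i s = _mm256_set1_epi32(src[i][k]);
                __m256i m = _mm256_loadu_si256((const __m256i *)M[k]);
                acc = _mm256_add_epi32(acc, _mm256_mullo_epi32(s, m));
            }
            _mm256_storeu_si256((__m256i *)dst[i], acc);
        }
    }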

Multi-Dimensional Record Scan with SIMD Vector Instructions (SIMD 벡터 명령어를 이용한 다차원 레코드 스캔)

  • Cho, Sung-Ryong;Han, Hwan-Soo;Lee, Sang-Won
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.732-736 / 2010
  • Processing large amounts of data is more important than ever. In particular, queries that require multi-dimensional record scans can be implemented efficiently with SIMD instruction sets. In this article, we present a SIMD record-scan technique that employs row-based scanning. Our technique differs from existing SIMD techniques for predicate evaluation and aggregate operations, which apply SIMD instructions to attributes in the same column of the database and exploit the column-based record organization of in-memory database systems. In contrast, our SIMD technique is useful for multi-dimensional record scans. As register and memory sizes grow, the row-based SIMD scan has a larger impact on performance. Moreover, since the technique is orthogonal to parallelization techniques for multi-core processors, it can be applied to both uniprocessors and multi-core processors without major changes to the software architecture.
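
A minimal sketch of the row-based idea, assuming each record stores four 32-bit attributes contiguously so that one SIMD compare evaluates a range predicate on all four dimensions of a single record at once (an illustration of the general approach, not the paper's code):

    /* Row-based SIMD scan sketch: four 32-bit attributes per record,
     * one compare per dimension bound.  Requires SSE2. */
    #include <emmintrin.h>
    #include <stdint.h>

    /* Returns 1 if lo[d] <= rec[d] <= hi[d] for every dimension d. */
    static int record_matches(const int32_t rec[4],
                              const int32_t lo[4],
                              const int32_t hi[4])
    {
        __m128i r = _mm_loadu_si128((const __m128i *)rec);
        __m128i l = _mm_loadu_si128((const __m128i *)lo);
        __m128i h = _mm_loadu_si128((const __m128i *)hi);
        /* A lane becomes all-ones where the record falls outside the box. */
        __m128i below = _mm_cmplt_epi32(r, l);
        __m128i above = _mm_cmpgt_epi32(r, h);
        __m128i out   = _mm_or_si128(below, above);
        return _mm_movemask_epi8(out) == 0;    /* no lane out of range */
    }

    /* Scan: count matching records out of n (each record = 4 attributes). */
    long scan(const int32_t (*recs)[4], long n,
              const int32_t lo[4], const int32_t hi[4])
    {
        long hits = 0;
        for (long i = 0; i < n; ++i)
            hits += record_matches(recs[i], lo, hi);
        return hits;
    }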

Random Partial Haar Wavelet Transformation for Single Instruction Multiple Threads (단일 명령 다중 스레드 병렬 플랫폼을 위한 무작위 부분적 Haar 웨이블릿 변환)

  • Park, Taejung
    • Journal of Digital Contents Society / v.16 no.5 / pp.805-813 / 2015
  • Many researchers expect that compressive sensing and sparse recovery can overcome the limitations of conventional digital techniques. However, these approaches require solving an l1-norm optimization problem for signal reconstruction. In the reconstruction process, the transform computation, a multiplication of a random matrix by a vector, consumes considerable computing power. To address this issue, parallel processing is applied to the optimization problem. In particular, because the original signal is very large, it is difficult to store the random matrix in memory directly, so the matrix must be handled procedurally. This paper presents a new parallel algorithm that computes a random partial Haar wavelet transform on a Single Instruction Multiple Threads (SIMT) platform.
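
The operator being parallelized can be sketched serially. Assuming "random partial Haar transform" means keeping a random subset of the coefficients of the full Haar transform (so the random matrix never has to be stored explicitly), a plain C version looks like this; the paper's contribution is mapping this computation onto SIMT threads, which the sketch does not show.

    /* Serial sketch of a random partial Haar measurement: run the fast Haar
     * pyramid on x, then keep a random subset of m coefficients.
     * n must be a power of two. */
    #include <math.h>
    #include <stdlib.h>

    void haar_transform(double *x, int n)
    {
        double *tmp = malloc((size_t)n * sizeof *tmp);
        for (int len = n; len > 1; len /= 2) {
            int half = len / 2;
            for (int i = 0; i < half; ++i) {
                tmp[i]        = (x[2*i] + x[2*i + 1]) / sqrt(2.0);  /* average */
                tmp[half + i] = (x[2*i] - x[2*i + 1]) / sqrt(2.0);  /* detail  */
            }
            for (int i = 0; i < len; ++i)
                x[i] = tmp[i];
        }
        free(tmp);
    }

    /* y[j] = (H x)[idx[j]] for m randomly drawn coefficient indices idx. */
    void random_partial_haar(double *x, int n, const int *idx, int m, double *y)
    {
        haar_transform(x, n);           /* overwrites x with its Haar transform */
        for (int j = 0; j < m; ++j)
            y[j] = x[idx[j]];
    }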

Instructions and Data Prefetch Mechanism using Displacement History Buffer (변위 히스토리 버퍼를 이용한 명령어 및 데이터 프리페치 기법)

  • Jeong, Yong Su;Kim, JinHyuk;Cho, Tae Hwan;Choi, SangBang
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.10 / pp.82-94 / 2015
  • In this paper, we propose a hardware prefetch mechanism with an efficient cache replacement policy that gives priority to the trigger block of a spatial region and generates the spatial region using displacement fields. Because the history is recorded relative to the trigger block, the program's access sequence can be taken into account, and because the history is stored as displacement values, instruction or data addresses can be prefetched quickly by adding the stored displacements to the trigger address. We also propose a replacement policy that, once the cache is full, evicts randomly among the low-priority blocks while protecting the prioritized trigger blocks. We evaluated the hardware prefetcher using the gem5 memory simulator and the PARSEC benchmarks. Compared with an existing hardware prefetcher that generates spatial regions using a bit vector, the L1 data cache miss rate was reduced by about 44.5% on average and the L1 instruction cache miss rate by about 26.1% on average. In addition, IPC (instructions per cycle) improved by about 23.7% on average.
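
The address-generation step described above (adding stored displacement values to a trigger address) can be modeled in a few lines of C. The structure and field sizes below are hypothetical, chosen only to illustrate the idea; they are not taken from the paper.

    /* Toy model of displacement-based prefetch address generation
     * (field names and sizes are hypothetical). */
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_DISP 8    /* displacements remembered per trigger block */

    struct dhb_entry {
        uint64_t trigger_addr;       /* block that triggered the region */
        int64_t  disp[MAX_DISP];     /* recorded displacements from it  */
        int      ndisp;
    };

    /* Emit prefetch candidates for a trigger hit: trigger + displacement. */
    size_t gen_prefetch_addrs(const struct dhb_entry *e, uint64_t out[MAX_DISP])
    {
        for (int i = 0; i < e->ndisp; ++i)
            out[i] = e->trigger_addr + (uint64_t)e->disp[i];
        return (size_t)e->ndisp;
    }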

Supercomputer's Security Issues and Defense: Survey (슈퍼컴퓨터 보안 이슈 및 대책)

  • Hong, Sunghyuck
    • Journal of Digital Convergence / v.11 no.4 / pp.215-220 / 2013
  • A computer is usually called a supercomputer when its computing power is 20 GFLOPS or greater. In the past, computers equipped with vector processors (units that process instructions involving logical operations and maximum/minimum computations in addition to ordinary computer instructions) to perform scientific calculations at very high speed were installed as supercomputers. Recently, cyber attacks have focused on supercomputers because, once a supercomputer is infected, hundreds of client PCs can be affected. Therefore, this paper analyzes supercomputer security issues and biometric countermeasures to raise the level of security on supercomputers.

Efficient Verification Method with Random Vectors for Embedded Control RISC Cores (내장형 제어 RISC코어를 위한 효율적인 랜덤 벡터 기능 검증 방법)

  • Yang, Hun-Mo;Gwak, Seung-Ho;Lee, Mun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SD / v.38 no.10 / pp.735-745 / 2001
  • Processors require both intensive and extensive functional verification during the design phase because of their general-purpose nature. The proposed random-vector verification method for embedded control RISC cores meets this goal by complementing conventional methods. The method proved its effectiveness during the design of CalmRISC-32, developed by Yonsei University and Samsung. It adopts a cycle-accurate instruction-level simulator as a reference model, runs the same simulation on both the reference and the target HDL, and reports an error whenever a difference between them is found. As a result, it catches errors that designers easily overlook and establishes new error check points.

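The methodology reduces to a compare loop between two models of the same core. The following C schematic shows that loop; gen_random_instr, iss_step, and hdl_step are placeholders for the constrained-random generator, the cycle-accurate reference simulator, and the HDL simulation interface, and are not part of the actual CalmRISC-32 environment.

    /* Schematic of random-vector co-simulation checking.  The extern
     * functions are placeholders for the verification harness. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_REGS 32

    typedef struct { uint32_t pc; uint32_t r[NUM_REGS]; } cpu_state;

    extern uint32_t  gen_random_instr(void);                 /* random vector   */
    extern cpu_state iss_step(cpu_state s, uint32_t instr);  /* reference model */
    extern cpu_state hdl_step(cpu_state s, uint32_t instr);  /* target HDL      */

    /* Run random instructions through both models; stop at first mismatch. */
    int run_random_vectors(cpu_state init, long cycles)
    {
        cpu_state ref = init, dut = init;
        for (long c = 0; c < cycles; ++c) {
            uint32_t instr = gen_random_instr();
            ref = iss_step(ref, instr);
            dut = hdl_step(dut, instr);
            for (int i = 0; i < NUM_REGS; ++i)
                if (ref.r[i] != dut.r[i]) {
                    fprintf(stderr, "cycle %ld: r%d mismatch (%08" PRIx32
                            " vs %08" PRIx32 ")\n", c, i, ref.r[i], dut.r[i]);
                    return 1;
                }
            if (ref.pc != dut.pc) {
                fprintf(stderr, "cycle %ld: PC mismatch\n", c);
                return 1;
            }
        }
        return 0;    /* no divergence observed */
    }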

Performance Characteristics of Subband Adaptive Array Antenna using Kalman Algorithm (Kalman 알고리즘에 의한 대역분할. 합성형 어댑티브 어레이 안테나의 동작 특성)

  • 박재성;오경석;주창복;박남천;정주수
    • Journal of the Korea Institute of Information and Communication Engineering / v.3 no.3 / pp.501-507 / 1999
  • At a mobile unit, the weight coefficient vector of an adaptive array antenna must adapt very quickly to the propagation environment. In this paper, a subband adaptive array signal processing method for a linear array antenna using the LMS and Kalman filter algorithms is proposed for BPSK and BFSK signals with S/I = 2 and S/N = 10. For a four-element, equally spaced linear array antenna system, the LMS and Kalman algorithms with subband adaptation are applied using the subband signal processing method, and computer simulation results for constant-envelope signals such as BPSK and BFSK show that the convergence of the directional patterns and the signal-tracking behavior are faster and more stable.

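Of the two algorithms named in the abstract, the LMS weight update is the simpler one and is sketched below for a four-element array; the step size mu and the reference signal d are placeholders, and the subband split/synthesis stages and the Kalman-filter variant studied in the paper are not shown.

    /* Minimal complex LMS weight update for a 4-element adaptive array. */
    #include <complex.h>

    #define N_ELEM 4

    /* One LMS iteration: y = w^H x, e = d - y, w <- w + mu * x * conj(e).
     * Returns the error e so the caller can track convergence. */
    double complex lms_update(double complex w[N_ELEM],
                              const double complex x[N_ELEM],
                              double complex d, double mu)
    {
        double complex y = 0;
        for (int i = 0; i < N_ELEM; ++i)
            y += conj(w[i]) * x[i];          /* array output          */
        double complex e = d - y;            /* error vs. reference d */
        for (int i = 0; i < N_ELEM; ++i)
            w[i] += mu * x[i] * conj(e);     /* steepest-descent step */
        return e;
    }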

Cross-architecture Binary Function Similarity Detection based on Composite Feature Model

  • Xiaonan Li;Guimin Zhang;Qingbao Li;Ping Zhang;Zhifeng Chen;Jinjin Liu;Shudan Yue
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2101-2123 / 2023
  • Recent studies have shown that neural-network-based binary code similarity detection performs well in vulnerability mining, plagiarism detection, and malicious code analysis. However, existing cross-architecture methods still suffer from insufficient feature characterization and low discrimination accuracy. To address these issues, this paper proposes a cross-architecture binary function similarity detection method based on a composite feature model (SDCFM). First, the binary function is converted into a vector representation according to the proposed composite feature model, which combines instruction statistical features, control flow graph structural features, and application program interface calling behavioral features. The composite features are then embedded by the proposed hierarchical embedding network, which is based on a graph neural network; block-level and function-level features are processed separately and finally fused into the embedding. In addition, to make the trained model more accurate and stable, the method uses the embeddings of predecessor nodes to modify node embeddings during the iterative updating process of the graph neural network. To assess the effectiveness of the composite feature model, we compare SDCFM with state-of-the-art methods on benchmark datasets. The experimental results show that SDCFM performs well both in the area under the curve for the binary function similarity detection task and in vulnerable candidate function ranking for the vulnerability search task.
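
The abstract does not state how two function embeddings are ultimately compared; a common choice for this kind of similarity detection, assumed here purely for illustration, is cosine similarity between the two embedding vectors:

    /* Cosine similarity between two function embeddings (illustrative
     * assumption; the paper's comparison metric is not stated above). */
    #include <math.h>

    double cosine_similarity(const double *a, const double *b, int dim)
    {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (int i = 0; i < dim; ++i) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (sqrt(na) * sqrt(nb) + 1e-12);  /* guard zero norms */
    }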

Fall detection based on acceleration sensor attached to wrist using feature data in frequency space (주파수 공간상의 특징 데이터를 활용한 손목에 부착된 가속도 센서 기반의 낙상 감지)

  • Roh, Jeong Hyun;Kim, Jin Heon
    • Smart Media Journal / v.10 no.3 / pp.31-38 / 2021
  • It is hard to predict when and where a fall will happen, and if rapid follow-up measures are not taken, a fall can be life-threatening, so studies that can detect falls automatically have become necessary. Among automatic fall-detection techniques, schemes based on an IMU (inertial measurement unit) sensor attached to the wrist have difficulty detecting falls because of wrist movement, but they are recognized as easy to wear and highly accessible. To overcome the difficulty of obtaining fall data, this study proposes an algorithm that learns efficiently from little data using machine learning methods such as KNN (k-nearest neighbors) and SVM (support vector machine). In addition, to improve the performance of these classifiers, the study uses feature data acquired in the frequency space. The proposed algorithm was analyzed by varying the model parameters and the parameters of the frequency feature extractor in experiments on standard datasets. The proposed algorithm copes adequately with the realistic problem that fall data are difficult to obtain. Because it is lighter than other classifiers, it is also easy to implement on small embedded systems where SIMD (single instruction multiple data) processing devices are difficult to mount.
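
One plausible reading of "feature data acquired in the frequency space" is the magnitude of a few low-frequency DFT bins computed over a window of acceleration magnitudes; the sketch below shows that reading, with the window length and the number of bins as assumed parameters rather than values from the paper.

    /* Frequency-space feature extraction sketch: magnitudes of the first
     * NFEAT DFT bins of a window of acceleration-magnitude samples. */
    #include <math.h>

    #define NFEAT 8

    /* x: window of n acceleration-magnitude samples; f: output features. */
    void spectral_features(const double *x, int n, double f[NFEAT])
    {
        const double PI = 3.14159265358979323846;
        for (int k = 0; k < NFEAT; ++k) {
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; ++t) {
                double ang = -2.0 * PI * k * t / n;   /* naive DFT bin k */
                re += x[t] * cos(ang);
                im += x[t] * sin(ang);
            }
            f[k] = sqrt(re * re + im * im) / n;       /* normalized magnitude */
        }
    }

These feature vectors would then be fed to the KNN or SVM classifiers named in the abstract.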