• Title/Summary/Keyword: 시간적 중복 (temporal redundancy)

Search results: 513

Task Duplication Scheduling to improve Communication Time in Distributed Real-Time Systems (분산 실시간 시스템에서 통신시간 개선을 위한 타스크 중복 스케줄링)

  • 박미경;김창수
    • Proceedings of the Korea Multimedia Society Conference / 1998.04a / pp.376-381 / 1998
  • A distributed real-time system makes resources and data located at remote sites available and must deliver results within specified deadlines; exploiting this temporal characteristic can improve both performance and reliability. Tasks executed in such a system are broadly divided into periodic and aperiodic tasks, and for fast execution most tasks are split into several subtasks that run in parallel. For periodic tasks that arrive at arbitrary times with deadlines in a distributed real-time environment, this study presents an efficient deadline-assignment algorithm for subtasks based on the EST (Earliest Start Time) technique, which considers inter-subtask communication time and execution time according to subtask type, together with a processor duplication algorithm for improving ITC (Inter Task Communication) time. Compared with existing methods, the results minimize deadline misses over the whole task set and improve processor utilization, inter-processor communication time, and completion time.

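The entry above mentions an EST (Earliest Start Time) based deadline-assignment step but gives no algorithmic detail. The following Python sketch shows only the generic EST computation used in list scheduling; the data layout, field names, and the rule that the communication term vanishes when a predecessor runs (or is duplicated) on the same processor are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Placement:
    processor: int
    finish: float

def earliest_start_time(
    subtask: str,
    candidate_proc: int,
    preds: List[str],
    placed: Dict[str, Placement],
    comm: Dict[Tuple[str, str], float],
) -> float:
    """EST of `subtask` on `candidate_proc`: the latest predecessor finish time
    plus the inter-subtask communication time, where the communication term is
    dropped when the predecessor already sits (or is duplicated) on the same
    processor -- the motivation for processor duplication."""
    est = 0.0
    for p in preds:
        delay = 0.0 if placed[p].processor == candidate_proc else comm[(p, subtask)]
        est = max(est, placed[p].finish + delay)
    return est
```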

Management Strategy of Hotspot Temporal Data using Minimum Overlap (최소 중복을 이용한 Hotspot 시간 데이터의 관리)

  • Kang, Ji-Hyung;Yun, Hong-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.196-199 / 2005
  • We propose a strategy to manage temporal data that arise in scientific applications. First, we define LB and RB to separate temporal data and to determine whether entity versions are stored in the past, current, or future segment. We also describe an algorithm that migrates temporal data with a hotspot distribution among the segments. A performance evaluation of average response time and space utilization is conducted: the average response time of the two methods is similar, while the proposed method saves space.

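The abstract above does not define LB and RB; one plausible reading is that they are two boundary timestamps that divide the timeline into past, current, and future segments. The sketch below classifies entity versions under that assumed interpretation; all names and the classification rule are illustrative, not the paper's definitions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EntityVersion:
    key: str
    valid_from: float   # start of the version's valid time
    valid_to: float     # end of the version's valid time

def classify_segment(v: EntityVersion, lb: float, rb: float) -> str:
    """Assumed rule: versions that end before LB belong to the past segment,
    versions that start after RB to the future segment, the rest to current."""
    if v.valid_to < lb:
        return "past"
    if v.valid_from > rb:
        return "future"
    return "current"

def rebucket(versions: List[EntityVersion], lb: float, rb: float) -> Dict[str, List[EntityVersion]]:
    """Re-run the classification, e.g. after LB/RB have shifted with the
    hotspot; a real implementation would migrate only the versions that move."""
    segments = {"past": [], "current": [], "future": []}
    for v in versions:
        segments[classify_segment(v, lb, rb)].append(v)
    return segments
```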

Fast Generation of 3-D Video Holograms using a Look-up Table and Temporal Redundancy of 3-D Video Image (룩업테이블과 3차원 동영상의 시간적 중복성을 이용한 3차원 비디오 홀로그램의 고속 생성)

  • Kim, Seung-Cheol;Kim, Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10B / pp.1076-1085 / 2009
  • In this paper, a new method for efficient computation of CGH patterns for 3-D video images is proposed through the combined use of temporal redundancy and look-up-table techniques. In the conventional N-LUT method, fringe patterns for the other object points on an image plane are obtained by simply shifting pre-calculated PFPs (principal fringe patterns). However, real-time generation of 3-D video holograms has remained impractical because their computation time grows massively compared with that of static holograms. On the other hand, ordinary 3-D moving pictures contain numerous similarities between video frames, called temporal redundancy, and this redundancy is commonly exploited to compress video. Therefore, this paper proposes an efficient hologram-generation method that combines the temporal redundancy of the 3-D video with the N-LUT method. To confirm the feasibility of the proposed method, experiments with test 3-D videos are carried out, and the results are compared with conventional methods in terms of the number of object points and computation time.
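
As a rough illustration of how temporal redundancy can be combined with a look-up-table approach, the sketch below reuses the previous frame's hologram and only subtracts or adds the fringe contributions of object points that changed between frames. This is not the paper's exact procedure: a single depth plane, the circular-shift placement of the PFP, and all names are assumptions.

```python
import numpy as np

def shifted_pfp(pfp: np.ndarray, x: int, y: int) -> np.ndarray:
    """Fringe contribution of an object point at (x, y): the pre-computed
    principal fringe pattern (PFP) shifted to that position (a circular shift
    is used here purely for illustration)."""
    return np.roll(np.roll(pfp, y, axis=0), x, axis=1)

def update_hologram(prev_holo: np.ndarray, prev_points: set, cur_points: set,
                    pfp: np.ndarray) -> np.ndarray:
    """Temporal redundancy: start from the previous frame's hologram and touch
    only the object points (x, y, amplitude) that changed between frames."""
    removed = prev_points - cur_points
    added = cur_points - prev_points
    holo = prev_holo.copy()
    for (x, y, amp) in removed:
        holo -= amp * shifted_pfp(pfp, x, y)
    for (x, y, amp) in added:
        holo += amp * shifted_pfp(pfp, x, y)
    return holo
```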

A Perceptual Audio Coder Based on Temporal-Spectral Structure (시간-주파수 구조에 근거한 지각적 오디오 부호화기)

  • 김기수;서호선;이준용;윤대희
    • Journal of Broadcast Engineering / v.1 no.1 / pp.67-73 / 1996
  • In general, high-quality audio coding (HQAC) combines conventional data-compression techniques with models of human perception. The primary auditory characteristic applied in HQAC is the masking effect in the spectral domain, so spectral techniques such as subband coding or transform coding are widely used [1][2]. However, no effort had yet been made to apply the temporal masking effect and temporal redundancy removal in HQAC. The audio compression method proposed in this paper eliminates statistical and perceptual redundancies in both the temporal and spectral domains. The transformed audio signal is divided into packets, each consisting of 6 frames. A packet contains 1536 samples (256 × 6), and the redundancies within a packet reside in both the temporal and spectral domains; both are eliminated at the same time in each packet. The psychoacoustic model has been improved to give more precise results by taking into account temporal masking as well as fine spectral masking. For quantization, each packet is divided into subblocks designed to be analogous to the nonlinear critical bands and to reflect temporal auditory characteristics. Consequently, high quality of the reconstructed audio is preserved at low bit rates.

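The only concrete numbers in the abstract above are the frame length (256 transform samples) and the packet size (6 frames, 1536 samples). The minimal sketch below merely groups transform frames into such packets, producing the 6 × 256 time-frequency blocks in which temporal and spectral redundancy would then be removed; the array layout and the use of NumPy are assumptions.

```python
import numpy as np

FRAME_LEN = 256          # transform samples per frame (from the abstract)
FRAMES_PER_PACKET = 6    # frames per packet, i.e. 1536 samples per packet

def make_packets(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, 256) array of transform coefficients.
    Returns (n_packets, 6, 256): each packet is a small time-frequency block
    in which temporal and spectral redundancy are removed together."""
    n_packets = frames.shape[0] // FRAMES_PER_PACKET
    usable = frames[: n_packets * FRAMES_PER_PACKET]
    return usable.reshape(n_packets, FRAMES_PER_PACKET, FRAME_LEN)
```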

Education Content of Department of Dental Hygiene and Actual Condition of the Overlapping Analytic Syllabus (치위생과 교육내용 및 교수요목 중복실태 분석)

  • Park, Myung-Suk;Kim, Chang-Hee
    • Journal of dental hygiene science / v.7 no.1 / pp.49-54 / 2007
  • This research was conducted to provide a standardization method for a new dental hygiene curriculum by identifying the overlap in the education content and syllabi of the Department of Dental Hygiene. To address these overlapping education programs, we make the following proposals. First, unified courses should be created by reconciling the specific terms of overlapping subjects, the curricula overlapping with the skills required by job analysis of dental hygienists, and overlapping class time; this would increase the efficiency of class time and required curricula. Next, proactive and continuous research on a standardized approach to the education content of the Department of Dental Hygiene is necessary, and the dental hygiene academic community should build trust.


Adaptive Replicated Object for Cache Coherence in Distributed Shared Memory (분산 공유 메모리 내에서 적응적 중복 객체에 의한 캐쉬 일관성)

  • 장재열;이병관
    • Proceedings of the Korean Information Science Society Conference / 2000.04a / pp.133-135 / 2000
  • In a distributed shared memory system, clients access remote shared memory over the network. On each access, a client stores the access information in its local cache and retrieves it when needed. Over time, however, the data may be updated by other clients. This paper therefore models remote data as objects and manages these objects to maintain data consistency in distributed shared memory. When a distributed object system is built through object replication, a limited degree of parallel execution can be obtained without any additional cost beyond the consistency cost of conventional replication schemes. Furthermore, to minimize the consistency-maintenance cost, known to be the largest overhead of replication schemes, the number of object replicas, the key factor determining this cost, is varied and managed dynamically, yielding a substantial improvement in total execution time.

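The abstract above says the number of object replicas is varied dynamically to keep the consistency-maintenance cost down, but it does not state the adjustment rule. The toy policy below, based on the observed read/write ratio, is offered purely to illustrate what such an adaptive replica count could look like; the threshold values, counters, and class layout are assumptions, not the paper's method.

```python
class ReplicatedObject:
    """Toy model of a replicated DSM object whose replica count adapts to the
    observed access mix: reads favour more copies (more parallel access),
    writes favour fewer (each copy adds consistency cost). Illustrative only."""

    def __init__(self, min_copies: int = 1, max_copies: int = 8):
        self.copies = min_copies
        self.min_copies = min_copies
        self.max_copies = max_copies
        self.reads = 0
        self.writes = 0

    def record_access(self, is_write: bool) -> None:
        if is_write:
            self.writes += 1
        else:
            self.reads += 1
        self._adapt()

    def _adapt(self) -> None:
        total = self.reads + self.writes
        if total < 100:                      # wait for a full observation window
            return
        read_ratio = self.reads / total
        if read_ratio > 0.8 and self.copies < self.max_copies:
            self.copies += 1                 # read-heavy: add a replica
        elif read_ratio < 0.5 and self.copies > self.min_copies:
            self.copies -= 1                 # write-heavy: drop a replica
        self.reads = self.writes = 0         # start a new window
```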

An Adaptive Block Matching Algorithm based on Temporal Correlation (시간적 상관성을 이용한 적응적 블록 정합 알고리즘)

  • Yun, Hyo-Sun;Lee, Guee-Sang
    • Proceedings of the Korea Information Processing Society Conference / 2001.10a / pp.797-800 / 2001
  • Data compression is essential in video coding, and temporal redundancy, the largest source of redundancy, is removed by performing motion estimation and motion compensation with data from the previous frame and encoding the difference between the original image and the image compensated by the estimated motion vectors. Motion estimation and compensation play an important role in video compression, but their heavy computation makes real-time and high-resolution applications difficult. If the motion of a block can be predicted before motion estimation, the position of the initial search point and the search pattern within the search region can be determined on that basis. This paper proposes and evaluates a new technique that performs motion estimation adaptively by determining the initial search point and search pattern from the high temporal correlation of motion. Experiments show that the proposed algorithm achieves a large reduction in computation.

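A common way to exploit the temporal correlation described above is to seed the search with the motion vector of the co-located block in the previous frame and refine it with a small local search. The sketch below shows only that idea; the predictor choice, SAD cost, block size, and search radius are assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def estimate_mv(cur: np.ndarray, ref: np.ndarray, bx: int, by: int,
                prev_mv: tuple, bs: int = 16, radius: int = 2) -> tuple:
    """Motion estimation seeded by temporal correlation: the initial search
    point is the motion vector of the co-located block in the previous frame
    (prev_mv); a small local search around it then refines the estimate."""
    h, w = cur.shape
    cur_blk = cur[by:by + bs, bx:bx + bs]
    best_mv, best_cost = prev_mv, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            mx, my = prev_mv[0] + dx, prev_mv[1] + dy
            x, y = bx + mx, by + my
            if 0 <= x <= w - bs and 0 <= y <= h - bs:
                cost = sad(cur_blk, ref[y:y + bs, x:x + bs])
                if cost < best_cost:
                    best_cost, best_mv = cost, (mx, my)
    return best_mv, best_cost
```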

A Fast Normalized Cross-Correlation Computation for WSOLA-based Speech Time-Scale Modification (WSOLA 기반의 음성 시간축 변환을 위한 고속의 정규상호상관도 계산)

  • Lim, Sangjun;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea / v.31 no.7 / pp.427-434 / 2012
  • The waveform similarity based overlap-add (WSOLA) method is known as an efficient, high-quality algorithm for time-scaling of speech signals. Its computational load is concentrated in the repeated normalized cross-correlation (NCC) calculations used to evaluate the similarity between two waveforms. To reduce this complexity, this paper proposes a fast NCC computation method in which the NCC is obtained from pre-calculated sum tables, eliminating the redundancy of repeated NCC calculations over adjacent regions. While the denominator of the NCC contains much redundancy regardless of the time-scale factor, the numerator contains less, and the amount depends on both the time-scale factor and the optimal shift value, so it requires a more sophisticated algorithm for fast computation. Simulation results show that the proposed method reduces the WSOLA execution time by about 40%, 47%, and 52% for time-scale compression and for 2x and 3x time-scale expansion, respectively, while producing exactly the same speech quality as conventional WSOLA.
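
The denominator redundancy mentioned in the abstract can be removed with a pre-computed "sum table" of squared samples, so that the energy of any candidate window costs one subtraction. The sketch below shows that standard trick for the denominator only; the paper's more involved tabulation of the numerator is not reproduced, and the variable names are assumptions.

```python
import numpy as np

def ncc_with_sum_table(x: np.ndarray, template: np.ndarray, offsets) -> list:
    """Normalized cross-correlation of `template` against windows of `x` at the
    given offsets. The window energy (denominator) is read from a pre-computed
    cumulative sum of squares instead of being re-summed for every offset."""
    n = len(template)
    t_energy = np.sqrt(float(np.dot(template, template)))
    # Sum table: csum[i] = sum of x[0:i]**2, so any window energy is one subtraction.
    csum = np.concatenate(([0.0], np.cumsum(x.astype(np.float64) ** 2)))
    result = []
    for k in offsets:
        seg = x[k:k + n]
        num = float(np.dot(seg, template))                 # numerator still explicit
        den = t_energy * np.sqrt(csum[k + n] - csum[k])    # O(1) energy lookup
        result.append(num / den if den > 0 else 0.0)
    return result
```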

Management Strategy of Hotspot Temporal Data using Minimum Overlap (최소 중복을 이용한 Hotspot 시간 데이터의 관리)

  • Yun Hong-won;Lee Jung-hwa
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.4 / pp.877-882 / 2005
  • We propose a strategy to manage temporal data that arise in scientific applications. First, we define LB and RB to separate temporal data and to determine whether entity versions are stored in the past, current, or future segment. We also describe an algorithm that migrates temporal data with a hotspot distribution among the segments. A performance evaluation of average response time and space utilization is conducted: the average response time of the two methods is similar, while the proposed method saves space.

Distributed data deduplication technique using similarity based clustering and multi-layer bloom filter (SDS 환경의 유사도 기반 클러스터링 및 다중 계층 블룸필터를 활용한 분산 중복제거 기법)

  • Yoon, Dabin;Kim, Deok-Hwan
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.60-70 / 2018
  • Software-defined storage (SDS) is being deployed in cloud environments to allow multiple users to virtualize physical servers, but a solution for optimizing space efficiency with limited physical resources is needed. In conventional data-deduplication systems, it is difficult to deduplicate redundant data uploaded to distributed storage. In this paper, we propose a distributed deduplication method using similarity-based clustering and a multi-layer Bloom filter. A Rabin hash is applied to determine the degree of similarity between virtual machine servers and to cluster similar virtual machines, which improves performance compared with deduplication on individual storage nodes. In addition, a multi-layer Bloom filter is incorporated into the deduplication process to shorten processing time by reducing the number of false positives. Experimental results show that the proposed method improves the deduplication ratio by 9% compared with a deduplication method using IP-address-based clusters, with no difference in processing time.
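
As a minimal illustration of how Bloom-filter layers can cut fingerprint-index lookups during deduplication, the sketch below checks a chunk fingerprint against a per-cluster filter and a per-node filter before consulting the authoritative index. The layering, hash construction, and sizes are assumptions, not the paper's design.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; k bit positions are derived from SHA-256 digests."""

    def __init__(self, size_bits: int = 1 << 20, k: int = 4):
        self.size = size_bits
        self.k = k
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def is_duplicate(fingerprint: bytes, cluster_filter: BloomFilter,
                 node_filter: BloomFilter, index: set) -> bool:
    """Layered check: only when both filters answer 'maybe present' is the
    authoritative fingerprint index consulted; a miss at any layer means the
    chunk is new and must be stored."""
    if not cluster_filter.might_contain(fingerprint):
        return False
    if not node_filter.might_contain(fingerprint):
        return False
    return fingerprint in index
```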