• Title/Summary/Keyword: multiple blocks


A Study on the 3-Dimensional Analysis by Bundle Adjustment in Close Range Photogrammetry (근접사진측량의 번들조정에 의한 삼차원 위치해석에 관한 연구)

  • 백은기;목찬상
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.6 no.2 / pp.10-18 / 1988
  • In the three-dimensional and deformation analysis of large structures, the multiple method of close-range photogrammetry, which shortens the object distance, is efficient. This study analyzes the influence of errors due to overlap, control points, and object distance, in order to resolve the problems raised by the multiple method. A wall board, 7 m by 3 m, was used as a test field, on which a total of 225 unknown points were evenly distributed. Photographs were taken with a P-31 camera system while varying the overlap and the object distance; a total of 143 negatives were used to compute three-dimensional coordinates and their standard errors, applying a bundle adjustment of strips and blocks developed as an on-line system. As the number of control points decreases, the simulation error increases, while the actual error first decreases and then increases again. With changes in object distance, the Z error is large compared with the X and Y errors, but good results in Z can be obtained by increasing the redundancy. Both the simulation error and the actual error show their best results at an endlap of about 70%. In summary, an appropriate arrangement of control points and overlap matters, and the multiple method with a short object distance should be widely applicable to the precision and deformation analysis of critical structures.
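
To make the bundle-adjustment step concrete, a minimal sketch on synthetic data follows; the focal length, camera count, and wall-board geometry are invented stand-ins, not the paper's P-31 on-line system. It only illustrates the core idea: camera poses and 3-D point coordinates are refined jointly by minimizing reprojection error with a nonlinear least-squares solver.

```python
# Minimal bundle-adjustment sketch (hypothetical focal length and geometry).
import numpy as np
from scipy.optimize import least_squares

F = 1000.0  # assumed focal length in pixels

def rotate(rvec, pts):
    """Rotate points by an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return pts
    k = rvec / theta
    return (pts * np.cos(theta)
            + np.cross(k, pts) * np.sin(theta)
            + np.outer(pts @ k, k) * (1 - np.cos(theta)))

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, uv_obs):
    """Reprojection residuals over all (camera, point) observations."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)   # [rvec | tvec] per camera
    pts3d = params[n_cams * 6:].reshape(n_pts, 3)
    res = np.empty((len(uv_obs), 2))
    for c in range(n_cams):
        m = cam_idx == c
        p = rotate(poses[c, :3], pts3d[pt_idx[m]]) + poses[c, 3:]
        res[m] = F * p[:, :2] / p[:, 2:3] - uv_obs[m]  # pinhole projection
    return res.ravel()

rng = np.random.default_rng(0)
n_cams, n_pts = 4, 25                    # toy stand-in for the 225-point board
pts_true = rng.uniform([-3.5, -1.5, 9.0], [3.5, 1.5, 11.0], (n_pts, 3))
poses_true = np.hstack([rng.normal(0, 0.05, (n_cams, 3)),
                        rng.normal(0, 0.2, (n_cams, 3))])
cam_idx = np.repeat(np.arange(n_cams), n_pts)
pt_idx = np.tile(np.arange(n_pts), n_cams)
x_true = np.hstack([poses_true.ravel(), pts_true.ravel()])
uv = residuals(x_true, n_cams, n_pts, cam_idx, pt_idx,
               np.zeros((n_cams * n_pts, 2))).reshape(-1, 2)
uv += rng.normal(0, 0.5, uv.shape)       # noisy image measurements
x0 = x_true + rng.normal(0, 0.01, x_true.size)  # perturbed initial values
sol = least_squares(residuals, x0, args=(n_cams, n_pts, cam_idx, pt_idx, uv))
print("RMS reprojection error (px):", np.sqrt(np.mean(sol.fun ** 2)))
```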


Co-Writing Multiple Files Based on Directory Locality for High Performance of Small File Writes (디렉토리 지역성을 활용한 작은 파일들의 모아 쓰기 기법)

  • Lee, Kyung-Jae;Ahn, Woo-Hyun;Oh, Jae-Won
    • The KIPS Transactions:PartA / v.15A no.5 / pp.275-286 / 2008
  • Fast File System (FFS) exploits large disk bandwidth to improve the write performance of large files: multiple blocks of a large file are written in a single disk I/O. However, the performance of small file writes is limited not by disk bandwidth but by disk access times, which are dominated by disk movements such as seeks and rotations, because FFS issues a separate disk write for each small file. We propose CW-FFS (Co-Writing Fast File System) to improve the write performance of small files by minimizing the disk movements needed to write them. Its key technique, the co-writing scheme, dynamically collects multiple small files named under a given directory and writes them in a single disk I/O to contiguous disk locations. Co-writing several small files in one disk I/O collapses the multiple disk movements needed for small file writes into a single movement, thus increasing the overall write performance of write-intensive applications. Furthermore, a file allocation scheme is introduced to prevent co-writing from degrading the disk spatial locality of small files named under a given directory. Measurements of our implementation in OpenBSD 4.0 show that CW-FFS improves the performance of small file writes over FFS by 5 to 35% in the Postmark benchmark.
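
As a rough illustration of co-writing, here is a user-space analogue; the real scheme operates inside the OpenBSD FFS block allocator, so the staging structures, block size, and backing-image layout below are inventions for the sketch. Small writes destined for the same directory are staged in memory and flushed to contiguous locations with a single I/O.

```python
# User-space analogue of the co-writing scheme (illustrative only).
import os
from collections import defaultdict

BLOCK = 4096                        # assumed file-system block size
staged = defaultdict(list)          # directory -> [(file name, data), ...]

def write_small_file(directory, name, data):
    """Stage a small write instead of issuing one disk I/O per file."""
    staged[directory].append((name, data))

def co_write(directory, image_fd, offset):
    """Flush every staged file of one directory in a single contiguous write."""
    chunks, index = [], {}
    for name, data in staged.pop(directory, []):
        index[name] = (offset + BLOCK * len(chunks), len(data))
        chunks.append(data.ljust(BLOCK, b'\0'))   # pad each file to block size
    if chunks:
        os.pwrite(image_fd, b''.join(chunks), offset)  # one I/O, one seek
    return index                    # name -> (disk offset, real length)

fd = os.open('segment.img', os.O_RDWR | os.O_CREAT, 0o644)
write_small_file('/mail/inbox', 'msg1', b'hello')
write_small_file('/mail/inbox', 'msg2', b'world')
print(co_write('/mail/inbox', fd, 0))
os.close(fd)
```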

Implementation of High-Throughput SHA-1 Hash Algorithm using Multiple Unfolding Technique (다중 언폴딩 기법을 이용한 SHA-1 해쉬 알고리즘 고속 구현)

  • Lee, Eun-Hee;Lee, Je-Hoon;Jang, Young-Jo;Cho, Kyoung-Rok
    • Journal of the Institute of Electronics Engineers of Korea SD / v.47 no.4 / pp.41-49 / 2010
  • This paper proposes a new high-speed SHA-1 architecture using multiple unfolding and pre-computation techniques. We unfold the iterative hash operation into two consecutive hash stages and reschedule the computation timing: part of the critical path is computed in the previous hash round, and the rest is performed in the present round. These techniques reduce the critical path from three additions to two. The result is a maximum clock frequency of 118 MHz, which provides a throughput of 5.9 Gbps. The proposed architecture achieves 26% higher throughput with 32% smaller hardware than comparable designs. This paper also introduces an analytical model of a multiple-SHA-1 architecture at the system level that maps large input data onto SHA-1 blocks in parallel. The model gives the number of SHA-1 blocks required for processing large multimedia data, which helps in deciding the hardware configuration. The high-speed SHA-1 is useful for generating condensed messages and may strengthen the security of mobile communication and Internet services.
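
The unfolding can be shown functionally in software. The sketch below is a behavioural model, not the proposed circuit: each loop iteration inlines SHA-1 rounds t and t+1, and the terms marked "precomputable" are the ones a pre-computation stage could evaluate one round early to shorten the critical path. The result is checked against `hashlib`.

```python
# Behavioural model of two-round (multiple) unfolding of SHA-1.
import hashlib
import struct

def _rol(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def _fk(t, b, c, d):
    """SHA-1 round function and constant for round t."""
    if t < 20:
        return (b & c) | (~b & d), 0x5A827999
    if t < 40:
        return b ^ c ^ d, 0x6ED9EBA1
    if t < 60:
        return (b & c) | (b & d) | (c & d), 0x8F1BBCDC
    return b ^ c ^ d, 0xCA62C1D6

def sha1_unfolded(msg: bytes) -> bytes:
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    ml = 8 * len(msg)                         # message length in bits
    msg = msg + b'\x80' + b'\x00' * ((55 - len(msg)) % 64) \
              + struct.pack('>Q', ml)
    for off in range(0, len(msg), 64):
        w = list(struct.unpack('>16I', msg[off:off + 64]))
        for i in range(16, 80):
            w.append(_rol(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1))
        a, b, c, d, e = h
        for t in range(0, 80, 2):             # two rounds per iteration
            f1, k1 = _fk(t, b, c, d)
            pre1 = (e + k1 + w[t]) & 0xFFFFFFFF          # precomputable
            a1 = (_rol(a, 5) + f1 + pre1) & 0xFFFFFFFF   # round t
            f2, k2 = _fk(t + 1, a, _rol(b, 30), c)
            pre2 = (d + k2 + w[t + 1]) & 0xFFFFFFFF      # precomputable
            a2 = (_rol(a1, 5) + f2 + pre2) & 0xFFFFFFFF  # round t+1
            a, b, c, d, e = a2, a1, _rol(a, 30), _rol(b, 30), c
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]
    return b''.join(struct.pack('>I', x) for x in h)

assert sha1_unfolded(b'abc') == hashlib.sha1(b'abc').digest()
```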

New in vitro multiple cardiac ion channel screening system for preclinical Torsades de Pointes risk prediction under the Comprehensive in vitro Proarrhythmia Assay concept

  • Jin Ryeol An;Seo-Yeong Mun;In Kyo Jung;Kwan Soo Kim;Chan Hyeok Kwon;Sun Ok Choi;Won Sun Park
    • The Korean Journal of Physiology and Pharmacology / v.27 no.3 / pp.267-275 / 2023
  • Cardiotoxicity, particularly drug-induced Torsades de Pointes (TdP), is a major concern in drug safety assessment. Human induced pluripotent stem cell-derived cardiomyocytes (human iPSC-CMs) have recently become an attractive human-based platform for predicting cardiotoxicity. Moreover, electrophysiological assessment of multiple cardiac ion channel blocks is emerging as an important parameter for recapitulating proarrhythmic cardiotoxicity. Therefore, we aimed to establish a novel in vitro multiple cardiac ion channel screening-based method using human iPSC-CMs to predict drug-induced arrhythmogenic risk. To explain the cellular mechanisms underlying the cardiotoxicity of three representative drugs of high (sotalol), intermediate (chlorpromazine), and low (mexiletine) TdP risk, their effects on the cardiac action potential (AP) waveform and voltage-gated ion channels were explored using human iPSC-CMs. In a proof-of-principle experiment, we investigated the effects of cardioactive channel inhibitors on the electrophysiological profile of human iPSC-CMs before evaluating the cardiotoxicity of these drugs. In human iPSC-CMs, sotalol prolonged the AP duration and reduced the total amplitude (TA) via selective inhibition of the IKr and INa currents, effects associated with an increased risk of ventricular tachycardia and TdP. In contrast, chlorpromazine did not affect the TA but slightly increased the AP duration via balanced inhibition of the IKr and ICa currents. Mexiletine did not affect the TA either, yet slightly reduced the AP duration via dominant inhibition of the ICa currents, an effect associated with a decreased risk of ventricular tachycardia and TdP. Based on these results, we suggest that this human iPSC-CM-based method can be extended to other preclinical protocols and can supplement drug safety assessments.

An Atlas Generation Method with Tiny Blocks Removal for Efficient 3DoF+ Video Coding (효율적인 3DoF+ 비디오 부호화를 위한 작은 블록 제거를 통한 아틀라스 생성 기법)

  • Lim, Sung-Gyun;Kim, Hyun-Ho;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.5 / pp.665-671 / 2020
  • MPEG-I is actively working on standardization of the coding of immersive video, which provides up to six degrees of freedom (6DoF) in viewpoint. 3DoF+ video, which adds motion parallax to the omnidirectional view of 360 video, renders a view at any desired viewpoint from multiple view videos acquired in a limited 3D space covering upper-body motion at a fixed position. The MPEG-I visual group is developing a test model called TMIV (Test Model for Immersive Video) in the course of developing the 3DoF+ video coding standard. In TMIV, the redundancy between the set of input view videos is removed, and several atlases are generated and coded by packing patches containing the remaining texture and depth regions into frames as compactly as possible. This paper presents an atlas generation method that removes small blocks in the atlas for more efficient 3DoF+ video coding. The proposed method achieves BD-rate bit savings of 0.7% and 1.4% for natural and graphic sequences, respectively, compared to TMIV.
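
A sketch of the tiny-block-removal step follows; the occupancy mask, block size, and area threshold are invented for illustration, and TMIV's actual pruning and packing pipeline is considerably more involved. The idea shown is simply that connected patch regions below a minimum area are dropped before packing, so the atlas carries fewer, larger patches.

```python
# Drop tiny pruned-region patches before atlas packing (illustrative).
import numpy as np
from scipy import ndimage

BLOCK = 8                    # packing granularity in pixels (assumed)
MIN_BLOCKS = 2               # regions below two blocks of area are dropped

def remove_tiny_blocks(mask: np.ndarray) -> np.ndarray:
    """Zero out connected regions of the pruning mask whose area is below
    the threshold, so they never become patches in the atlas."""
    labels, n = ndimage.label(mask)
    sizes = np.bincount(labels.ravel())            # area per labelled region
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes[1:] >= MIN_BLOCKS * BLOCK * BLOCK
    return np.where(keep[labels], mask, 0)

mask = np.zeros((32, 32), np.uint8)
mask[4:20, 4:20] = 1         # 256-pixel region: kept
mask[28:30, 28:30] = 1       # 4-pixel speck: removed before packing
print(remove_tiny_blocks(mask).sum())   # -> 256
```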

Adaptive Service Mode Conversion to Minimize Buffer Space Requirement in VOD Server (주문형 비디오 서버의 버퍼 최소화를 위한 가변적 서비스 모드 변환)

  • Won, Yu-Jip
    • Journal of KIISE:Computer Systems and Theory / v.28 no.5 / pp.213-217 / 2001
  • Excessive memory buffer requirements in continuous media playback are a serious impediment to the widespread use of on-line multimedia services. The skewed access frequency of available video files provides an opportunity to re-use data blocks loaded by one session for later sessions. We present a novel algorithm that minimizes the buffer requirement across multiple concurrent multimedia playback sessions. In continuous media playback from disk, a certain amount of memory buffer is required to synchronize asynchronous disk read operations with synchronous playback operations. As the aggregate playback bandwidth increases, a larger buffer must be allocated for this synchronization. The focus of this work is to study the asymptotic behavior of the synchronization buffer requirement and to develop an algorithm that copes with this excessive buffer requirement under bandwidth congestion. We argue that in a large-scale continuous media server it may not be necessary to read the blocks for each session directly from disk. The strength of our approach lies in the fact that it dynamically adapts to the disk utilization of the server and finds the optimal way of servicing the individual sessions while minimizing the overall buffer space requirement. The optimality of the proposed algorithm is proven, and its effectiveness and performance are examined via simulation.
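
The adaptive mode-conversion idea can be sketched in the spirit of interval caching; the cost model below (a uniform playback rate, a disk bandwidth budget, and buffer cost equal to inter-session gap times rate) is a simplification invented for this sketch, not the paper's proven-optimal algorithm. When the disk is congested, a session following an earlier session of the same video is switched to buffer mode and re-uses its predecessor's blocks.

```python
# Greedy disk/buffer mode assignment (interval-caching flavoured sketch).
def plan_modes(sessions, rate, disk_capacity):
    """Choose 'disk' or 'buffer' mode per session so the aggregate disk load
    fits the budget with the least buffer.  sessions: [(video, start), ...]."""
    order = sorted(range(len(sessions)), key=lambda i: sessions[i][1])
    last, candidates = {}, []           # candidates: (gap to predecessor, idx)
    for i in order:
        vid, t = sessions[i]
        if vid in last:
            candidates.append((t - sessions[last[vid]][1], i))
        last[vid] = i
    modes = {i: 'disk' for i in range(len(sessions))}
    disk_load, buffer_used = len(sessions) * rate, 0.0
    for gap, i in sorted(candidates):   # the smallest gap buffers cheapest
        if disk_load <= disk_capacity:
            break
        modes[i] = 'buffer'             # re-use the predecessor's blocks
        disk_load -= rate
        buffer_used += gap * rate       # blocks pinned for 'gap' seconds
    return modes, buffer_used

# Three sessions of video A and one of B; a disk budget of two streams forces
# the two A-followers into buffer mode.
print(plan_modes([('A', 0), ('A', 30), ('B', 10), ('A', 45)],
                 rate=1.5, disk_capacity=3.0))
```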


Efficient Methods for Detecting Frame Characteristics and Objects in Video Sequences (내용기반 비디오 검색을 위한 움직임 벡터 특징 추출 알고리즘)

  • Lee, Hyun-Chang;Lee, Jae-Hyun;Jang, Ok-Bae
    • Journal of KIISE:Software and Applications / v.35 no.1 / pp.1-11 / 2008
  • This paper extracts motion vector characteristics to support efficient content-based video search. Traditionally, the current frame of a video is divided into blocks of equal size, and a block matching algorithm (BMA) is used to predict the motion of each block relative to a reference frame on the time axis. However, BMA has several restrictions, and the vectors it produces sometimes differ from the actual motion. The full search method avoids this problem, but it requires a large amount of computation. Thus, as an alternative, the present study extracts the spatio-temporal characteristics of motion vectors, called Motion Vector Spatio-Temporal Correlations (MVSTC). As a result, motion vectors can be predicted more accurately using the motion vectors of neighboring blocks. However, because there are multiple reference block vectors, this additional information must be sent to the receiving end. We therefore consider how to predict the motion characteristics of each block and how to define an appropriate search range. Based on the proposed algorithm, we examine motion prediction techniques for motion compensation and present the results of applying them.
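
A compact sketch of the two search strategies follows, with invented block and window sizes: an exhaustive full search, and an MVSTC-flavoured shortcut that centres a much smaller window on the median of the neighbouring blocks' vectors.

```python
# Full-search block matching vs. a neighbour-predicted search (sketch).
import numpy as np

def block_match(ref, cur, bx, by, bs=16, rng=8, center=(0, 0)):
    """Best (dy, dx) by sum of absolute differences, searched in a
    (2*rng+1)^2 window around 'center' in the reference frame."""
    block = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    best, best_mv = None, center
    for dy in range(center[0] - rng, center[0] + rng + 1):
        for dx in range(center[1] - rng, center[1] + rng + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue                      # candidate falls outside frame
            sad = int(np.abs(ref[y:y + bs, x:x + bs].astype(np.int32)
                             - block).sum())
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

def predicted_match(ref, cur, bx, by, neighbour_mvs, bs=16):
    """MVSTC-flavoured shortcut: centre a small window on the median of
    the neighbouring blocks' vectors instead of running a full search."""
    c = (int(np.median([m[0] for m in neighbour_mvs])),
         int(np.median([m[1] for m in neighbour_mvs])))
    return block_match(ref, cur, bx, by, bs, rng=2, center=c)

r = np.random.default_rng(1)
prev = r.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(prev, (3, -2), axis=(0, 1))     # frame shifted by (3, -2)
print(block_match(prev, cur, 16, 16))          # full search finds (-3, 2)
print(predicted_match(prev, cur, 16, 16, [(-3, 2), (-3, 1), (-2, 2)]))
```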

Declustering of High-dimensional Data by Cyclic Sliced Partitioning (주기적 편중 분할에 의한 다차원 데이터 디클러스터링)

  • Kim Hak-Cheol;Kim Tae-Wan;Li Ki-Joune
    • Journal of KIISE:Databases / v.31 no.6 / pp.596-608 / 2004
  • Much work has been done to reduce disk access time in I/O-intensive systems that store and handle massive amounts of data, by distributing the data across multiple disks and accessing them in parallel. Most previous work has focused on an efficient mapping from a grid cell to a disk number, on the assumption that the data space is partitioned into a regular grid. Although grid-like partitioning achieves good performance for low-dimensional data, performance degrades as the dimensionality grows, even with a good disk allocation scheme. This is because the entire data space is partitioned equally regardless of the distribution of data objects, while most data in a high-dimensional space lie near the surface of the space. For that reason, we propose a new declustering algorithm based on a partitioning scheme that slices the data space from the surface inward. Several experimental results show that with such an unbalanced partitioning scheme, the number of data blocks touched by a query is reduced remarkably as the dimensionality and the query size grow. We also propose disk allocation schemes based on the layout of the data blocks that result from partitioning. To show the performance of the proposed algorithm, we performed experiments on data of varying dimensionality over a wide range of disk counts. Our disk allocation method performs within 10 additional disk accesses of the strictly optimal allocation scheme. We compared our algorithm with the Kronecker-sequence-based declustering algorithm, reported to be the best among grid-partition- and mapping-function-based declustering algorithms, and improve declustering performance by up to a factor of 14 as the dimensionality grows.
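
For context, a toy declustering harness is shown below. It uses the classic Disk Modulo mapping as a baseline (the paper's cyclic sliced partitioning from the surface inward is not reproduced here) and measures a range query's parallel cost as the maximum number of touched cells that any one disk must serve.

```python
# Declustering evaluation harness with a Disk Modulo baseline (sketch).
import numpy as np
from itertools import product

M = 8                                    # number of disks (assumed)

def disk_of(cell):
    """Disk Modulo baseline: (sum of cell coordinates) mod M."""
    return sum(cell) % M

def query_cost(lo, hi):
    """Parallel response time of a range query, measured as the maximum
    number of touched cells that any single disk must serve."""
    counts = np.zeros(M, dtype=int)
    for cell in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
        counts[disk_of(cell)] += 1
    return counts.max()

# A 4x4x4 range query over a 3-D grid touches 64 cells; a strictly optimal
# allocation would put ceil(64 / 8) = 8 on each disk.  Disk Modulo yields 12.
print(query_cost((0, 0, 0), (3, 3, 3)))
```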

A Block Relocation Algorithm for Reducing Network Consumption in Hadoop Cluster (하둡 클러스터의 네트워크 사용량 감소를 위한 블록 재배치 알고리즘)

  • Kim, Jun-Sang;Kim, Chang-Hyeon;Lee, Won-Joo;Jeon, Chang-Ho
    • Journal of the Korea Society of Computer and Information / v.19 no.11 / pp.9-15 / 2014
  • In this paper, we propose a block relocation algorithm for reducing network traffic in a Hadoop cluster. The scheduler of a Hadoop cluster receives jobs from users, and each job is divided into multiple tasks assigned to nodes. The scheduler tries to allocate each task to a node that satisfies data locality. If a task is assigned to a node that does not hold the data block to be processed, the task can run only after the block is transferred from another node. Because blocks in the cluster have different access frequencies, workload differs among nodes. The proposed algorithm therefore relocates blocks according to the task allocation pattern of the Hadoop scheduler. As a result, node workloads are leveled, tasks are less often placed on nodes that lack the block they must process, and the network traffic of the cluster is reduced. We evaluate the proposed block relocation algorithm by simulation. The results show up to a 23.3% reduction in network consumption compared with default delay scheduling.
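
A greedy sketch of frequency-driven relocation appears below. The data structures (per-block access counts and a block-to-node placement map) are assumptions for illustration; the paper derives load from the Hadoop scheduler's task-allocation pattern rather than from raw counts.

```python
# Greedy block relocation to level per-node access load (sketch).
from collections import defaultdict

def relocate(block_freq, placement, nodes):
    """Greedily move the hottest block from the most- to the least-loaded
    node while doing so still narrows the load gap between them."""
    load = defaultdict(int)
    for blk, node in placement.items():
        load[node] += block_freq[blk]
    moves = []
    while True:
        hot = max(nodes, key=lambda n: load[n])
        cold = min(nodes, key=lambda n: load[n])
        gap = load[hot] - load[cold]
        hot_blocks = [b for b, n in placement.items() if n == hot]
        if gap == 0 or not hot_blocks:
            break
        blk = max(hot_blocks, key=lambda b: block_freq[b])
        if block_freq[blk] >= gap:      # the move would no longer help
            break
        placement[blk] = cold           # relocate the block
        load[hot] -= block_freq[blk]
        load[cold] += block_freq[blk]
        moves.append((blk, hot, cold))
    return moves

freq = {'b1': 90, 'b2': 10, 'b3': 10, 'b4': 10}
place = {'b1': 'n1', 'b2': 'n1', 'b3': 'n1', 'b4': 'n2'}
print(relocate(freq, place, ['n1', 'n2', 'n3']))   # -> [('b1', 'n1', 'n3')]
```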

Design and Implementation of User-Level FileSystem in the Combat Management System

  • Kang, Seok-Hyun;Kim, Keun-Hee
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.9-16 / 2022
  • In this paper, we propose the design and use of RDBS (Record Block Data file management System) so that data can be recovered when data files in the Combat Management System become mismatched. The CMS (Combat Management System) keeps the same files in multiple IPN (Information Processing Node) repositories to support multiplexing. However, data files can become mismatched due to equipment maintenance or operator error. The existing CMS does not track the change history of data files; when a mismatch occurs, the files are synchronized based on the latest date. However, the file with the latest date is not necessarily the most reliable, and once synchronization has taken place, the pre-synchronization data cannot be recovered. To solve this problem, data is stored and synchronized in units of record blocks using the RDBS proposed in this paper, and the rsync algorithm is used to reduce the synchronization overhead that record-block granularity introduces. Software incorporating RDBS was performance-tested in a simulated environment, and confirmation of normal operation showed that it can be applied to the CMS.
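
To illustrate record-block synchronization, here is a simplified fixed-block sketch using rsync's weak checksum; the block size and interfaces are invented, not the CMS's actual RDBS API, and the full rsync algorithm additionally rolls the checksum byte-by-byte to find moved data, which is omitted here. Only blocks whose checksums disagree with the peer's need to be resent.

```python
# Fixed-block variant of rsync-style change detection (sketch).
def weak_sums(data: bytes, block: int = 512):
    """rsync-style weak checksum (two 16-bit sums) for each record block."""
    sums = {}
    for off in range(0, len(data), block):
        chunk = data[off:off + block]
        a = sum(chunk) % 65521
        b = sum((len(chunk) - i) * byte
                for i, byte in enumerate(chunk)) % 65521
        sums[off // block] = (b << 16) | a
    return sums

def changed_blocks(local: bytes, remote_sums: dict, block: int = 512):
    """Indices of record blocks that differ from the peer and must be resent
    (strong-checksum confirmation, as in real rsync, is omitted here)."""
    return [i for i, s in weak_sums(local, block).items()
            if remote_sums.get(i) != s]

old = b'record-A' * 512                 # 4096 bytes = 8 record blocks
new = bytearray(old)
new[100:108] = b'record-B'              # corrupt one region in block 0
print(changed_blocks(bytes(new), weak_sums(old)))   # -> [0]
```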