• Title/Summary/Keyword: Large-volume data

731 search results

Volume Rendering using Grid Computing for Large-Scale Volume Data

  • Nishihashi, Kunihiko;Higaki, Toru;Okabe, Kenji;Raytchev, Bisser;Tamaki, Toru;Kaneda, Kazufumi
    • International Journal of CAD/CAM / v.9 no.1 / pp.111-120 / 2010
  • In this paper, we propose a volume rendering method that uses grid computing for large-scale volume data. Grid computing is attractive because medical institutions and research facilities often have a large number of idle computers. The large-scale volume is divided into sub-volumes, and the sub-volumes are rendered on the grid. In a grid, different computers rarely have the same processor speed, so the order in which results return rarely matches the order in which jobs were sent; order, however, is vital when combining results into a final image. Job scheduling is therefore important in grid-based volume rendering, and we use an obstacle flag, which changes priorities dynamically, to manage sub-volume results. Obstacle flags track the visibility of each sub-volume when the line of sight from the viewpoint is obscured by other sub-volumes. The proposed dynamic job scheduling based on visibility substantially increases efficiency. We implemented the method on our university's campus grid, and comparative experiments showed that it provides significant efficiency improvements for large-scale volume rendering.
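The visibility-driven scheduling idea can be sketched as a priority queue over sub-volumes in front-to-back depth order, with a flag per sub-volume recording whether it is still occluded. This is a minimal illustration under assumed semantics, not the authors' implementation; the `DynamicScheduler` and `Job` names are hypothetical.

```python
# Sketch: dynamic job scheduling for sub-volume rendering jobs.
# Front-most sub-volumes get the highest priority; a finished
# sub-volume clears the obstacle flag of those it occluded.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                       # lower value = dispatched earlier
    subvolume_id: int = field(compare=False)

class DynamicScheduler:
    def __init__(self, depth_order):
        # depth_order: sub-volume ids sorted front-to-back from the viewpoint
        self.queue = [Job(priority=i, subvolume_id=sid)
                      for i, sid in enumerate(depth_order)]
        heapq.heapify(self.queue)
        # everything except the front-most sub-volume starts out occluded
        self.obstacle_flag = {sid: i > 0 for i, sid in enumerate(depth_order)}

    def next_job(self):
        return heapq.heappop(self.queue).subvolume_id if self.queue else None

    def mark_rendered(self, subvolume_id):
        # a rendered sub-volume no longer obscures the ones behind it
        self.obstacle_flag.pop(subvolume_id, None)

sched = DynamicScheduler(depth_order=[3, 1, 2, 0])
first = sched.next_job()   # front-most sub-volume is dispatched first
```

A real scheduler would also reassign jobs from slow grid nodes; this sketch only shows the priority/flag bookkeeping.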

BIM Geometry Cache Structure for Data Streaming with Large Volume (대용량 BIM 형상 데이터 스트리밍을 위한 캐쉬 구조)

  • Kang, Tae-Wook
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.9 / pp.1-8 / 2017
  • The purpose of this study is to propose a cache structure for processing large-volume building information modeling (BIM) geometry data when it is difficult to allocate enough physical memory. As the number of BIM orders in the public sector has increased, visualizing and computing large-volume BIM geometry data has become more common. Design and review collaboration can require a lot of time to download large-volume BIM data over the network, and if the BIM data exceed the free physical memory, visualization and geometry computation become impossible. To utilize large amounts of BIM data with insufficient physical memory or a low-bandwidth network, it is advantageous to cache only the data necessary for BIM geometry rendering and calculation. This study proposes a cache structure for efficiently rendering and calculating large-volume BIM geometry data where it is difficult to allocate enough physical memory.
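The caching idea, keeping only recently used geometry chunks in limited memory, can be sketched as a small LRU cache. The paper's actual cache structure is not specified here; `GeometryCache`, the chunk ids, and `load_chunk` are illustrative names.

```python
# Sketch: an LRU cache for BIM geometry chunks. On a miss the chunk is
# fetched (here simulated by a callback); when capacity is exceeded the
# least recently used chunk is evicted.
from collections import OrderedDict

class GeometryCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()          # chunk_id -> geometry payload

    def get(self, chunk_id, load_chunk):
        if chunk_id in self._store:
            self._store.move_to_end(chunk_id)  # hit: mark most recently used
            return self._store[chunk_id]
        payload = load_chunk(chunk_id)         # miss: fetch from disk/network
        self._store[chunk_id] = payload
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)    # evict least recently used
        return payload

cache = GeometryCache(capacity=2)
cache.get("wall_01", lambda cid: f"mesh:{cid}")
cache.get("slab_02", lambda cid: f"mesh:{cid}")
cache.get("wall_01", lambda cid: f"mesh:{cid}")   # hit, refreshes recency
cache.get("door_03", lambda cid: f"mesh:{cid}")   # evicts slab_02
```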

A Study on the Improvement of Large-Volume Scalable Spatial Data for VWorld Desktop (브이월드 데스크톱을 위한 대용량 공간정보 데이터 지원 방안 연구)

  • Kang, Ji-Hun;Kim, Hyeon-Deok;Kim, Jung-Ok
    • Journal of Cadastre & Land InformatiX / v.45 no.1 / pp.169-179 / 2015
  • Recently, as the amount of data has increased rapidly, IT has entered the 'Big Data' era, in which large volumes of data are handled at once. In the spatial-information field, a spatial data service technology is required to make use of this large and varied data. In this study, we first review representative spatial-information data services abroad, and then develop techniques for processing large KML data so that it can be used in KML format in the VWorld desktop. A test was conducted on large KML data to verify the developed KML partitioning method and tools. As a result, an index file and partitioned files were produced, and the data could be displayed in the VWorld desktop.
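The partitioning idea can be sketched as binning placemarks into grid cells and building an index of the non-empty cells, so a viewer loads only the cells in view. The cell size and the index layout below are assumptions for illustration, not the paper's actual partitioned-KML format.

```python
# Sketch: split a large set of placemark coordinates into grid cells
# (the "partitioned files") and list the non-empty cells (the "index").
def partition(placemarks, cell_deg=1.0):
    """placemarks: list of (name, lon, lat) tuples.
    Returns {(cell_x, cell_y): [placemarks in that cell]}."""
    tiles = {}
    for name, lon, lat in placemarks:
        cell = (int(lon // cell_deg), int(lat // cell_deg))
        tiles.setdefault(cell, []).append((name, lon, lat))
    return tiles

pts = [("A", 127.1, 37.5), ("B", 127.9, 37.2), ("C", 129.3, 35.1)]
tiles = partition(pts)
index = sorted(tiles)    # the index file would list these cells
```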

A Block-Based Volume Rendering Algorithm Using Shear-Warp factorization (쉬어-왑 분해를 이용한 블록 기반의 볼륨 렌더링 기법)

  • 권성민;김진국;박현욱;나종범
    • Journal of Biomedical Engineering Research / v.21 no.4 / pp.433-439 / 2000
  • Volume rendering is a powerful tool for visualizing sampled scalar values from 3D data without modeling geometric primitives, and it can describe the surface detail of a complex object. Owing to this characteristic, volume rendering has been used to visualize medical data. The size of volume data is usually too large to handle in real time, and various volume rendering algorithms have recently been proposed to reduce rendering time. However, most of them are not suitable for fast rendering of large non-coded volume data. In this paper, we propose a block-based fast volume rendering algorithm using shear-warp factorization for non-coded volume data. The algorithm performs volume rendering using organ segmentation data as well as block-based 3D volume data, and increases rendering speed for large non-coded volumes. The proposed algorithm is evaluated by rendering 3D X-ray CT body images and MR head images.
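One ingredient of block-based rendering can be sketched as empty-block skipping: the volume is split into fixed-size blocks, each block is summarized once, and rendering skips blocks with no visible content. This illustrates only the block structure, not the shear-warp factorization or the use of segmentation data; the block size and threshold are assumptions.

```python
# Sketch: per-block maxima for empty-block skipping. A renderer would
# composite only the blocks whose maximum exceeds a visibility threshold.
def block_max(volume, b):
    """Maximum value per b*b*b block; volume is a nested list
    volume[z][y][x] with all dimensions divisible by b."""
    n = len(volume)
    nb = n // b
    out = [[[float("-inf")] * nb for _ in range(nb)] for _ in range(nb)]
    for z in range(n):
        for y in range(n):
            for x in range(n):
                bz, by, bx = z // b, y // b, x // b
                if volume[z][y][x] > out[bz][by][bx]:
                    out[bz][by][bx] = volume[z][y][x]
    return out

# an 8x8x8 volume with a single bright voxel in one corner block
vol = [[[0.0] * 8 for _ in range(8)] for _ in range(8)]
vol[5][5][5] = 1.0
maxima = block_max(vol, b=4)
occupied = sum(maxima[i][j][k] > 0.0
               for i in range(2) for j in range(2) for k in range(2))
```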


Cost-Efficient and Automatic Large Volume Data Acquisition Method for On-Chip Random Process Variation Measurement

  • Lee, Sooeun;Han, Seungho;Lee, Ikho;Sim, Jae-Yoon;Park, Hong-June;Kim, Byungsub
    • JSTS:Journal of Semiconductor Technology and Science / v.15 no.2 / pp.184-193 / 2015
  • This paper proposes a cost-efficient, automatic method for acquiring large volumes of data from a test chip, without expensive equipment, to characterize random process variation in an integrated circuit. Our method requires only a test chip, a personal computer, a cheap digital-to-analog converter, a controller, and multimeters, so large-volume measurement can be performed on an office desk at low cost. To demonstrate the method, we designed a test chip with a current-mode logic driver and an array of 128 current mirrors that mimic the random process variation of the driver's tail current mirror. Using our method, we characterized, from large-volume measurement data, the random variation of the driver's output voltage caused by random process variation in the tail current mirror. The statistical characteristics of the driver's output voltage calculated from the measured data are compared with Monte Carlo simulation. The differences between the measured and simulated averages and standard deviations are less than 20%, showing that random process variation can easily be characterized at low cost with our method.

Compression of time-varying volume data using Daubechies D4 filter (Daubechies D4 필터를 사용한 시간가변(time-varying) 볼륨 데이터의 압축)

  • Hur, Young-Ju;Lee, Joong-Youn;Koo, Gee-Bum
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2007.02a / pp.982-987 / 2007
  • The need for compression schemes for volume data has grown with increasing data sizes and network use. Various compression schemes exist, chosen according to data type, application field, preference, and so on. However, the amount of data produced by application scientists has grown enormously, and most scientific data take the form of 3D volumes. For 2D images and 3D moving pictures, many standards are established and widely used, but for 3D volume data, and especially time-varying volume data, applicable compression schemes are hard to find. In this paper, we present a compression scheme for encoding time-varying volume data, aimed at visualization. The scheme uses MPEG's I- and P-frame concepts to raise the compression ratio, and transforms the volume data with the Daubechies D4 filter before encoding, so image quality is better than in other wavelet-based compression schemes. It encodes time-varying volume data composed of single-precision floating-point values, provides random reconstruction access per unit, and can compress large time-varying volume data by exploiting correlation between frames while preserving image quality.
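The Daubechies D4 filter mentioned above is a standard four-tap wavelet filter; one decomposition level can be sketched as below. This is the textbook transform with periodic boundary handling, not the authors' full MPEG-style codec.

```python
# Sketch: one level of the Daubechies D4 wavelet transform.
# A smooth signal concentrates its energy in the approximation
# coefficients, which is what makes thresholding the details effective.
import math

def d4_forward(x):
    """Returns (approximation, detail) lists; periodic boundary."""
    s3 = math.sqrt(3.0)
    s2 = math.sqrt(2.0)
    h = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
         (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]   # scaling filter
    g = [h[3], -h[2], h[1], -h[0]]                   # wavelet filter
    n = len(x)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(h[k] * x[(i + k) % n] for k in range(4)))
        detail.append(sum(g[k] * x[(i + k) % n] for k in range(4)))
    return approx, detail

a, d = d4_forward([1.0] * 8)
# a constant signal yields zero detail coefficients
```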


Large-scale Structure Studies with Mock Galaxy Sample from the Horizon Run 4 & Multiverse Simulations

  • Hong, Sungwook E.
    • The Bulletin of The Korean Astronomical Society / v.45 no.1 / pp.29.3-29.3 / 2020
  • Cosmology studies the origin, fundamental properties, and evolution of the universe. Many observational data on galaxies are now available, and fair comparison with observations requires large-volume numerical simulations whose spatial distributions are of good quality. On the other hand, since galaxy evolution is affected by both gravitational and baryonic effects, it is nontrivial to populate galaxies using N-body simulations alone, while full hydrodynamic simulations with large volume are computationally costly. Therefore, alternative methods of assigning galaxies to N-body simulations are necessary for successful cosmological studies. In this talk, I introduce MBP-galaxy abundance matching, a novel galaxy assignment method that agrees with the spatial distribution of observed galaxies on scales between 0.1 Mpc and 100 Mpc. I also introduce mock galaxy catalogs of the Horizon Run 4 and Multiverse simulations, large-volume cosmological N-body simulations carried out by the Korean community, and some recent work that uses those mock galaxies to better understand our universe.


Flow Visualization Model Based on B-spline Volume (비스플라인 부피에 기초한 유동 가시화 모델)

  • 박상근;이건우
    • Korean Journal of Computational Design and Engineering / v.2 no.1 / pp.11-18 / 1997
  • Scientific volume visualization addresses the representation, manipulation, and rendering of volumetric data sets, providing mechanisms for looking closely into structures and understanding their complexity and dynamics. In the past several years, a tremendous amount of research and development has been directed toward algorithms and data modeling methods for scientific data visualization, but there has been very little work on the mathematical volume models that feed it. In flow visualization especially, such a volume model has long been needed as a guide for displaying the very large amounts of data resulting from numerical simulations. In this paper, we focus on the mathematical representation of volumetric data sets and on extracting meaningful information from the derived volume model. For this purpose, a B-spline volume is extended to a high-dimensional trivariate model, called here a flow visualization model. Two three-dimensional examples demonstrate the capabilities of this model.
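A trivariate B-spline volume is a tensor product of three univariate B-splines. As a minimal sketch, assuming a uniform cubic basis and a single 4x4x4 span of scalar control values (the paper's model is more general), evaluation looks like this:

```python
# Sketch: tensor-product evaluation of a trivariate uniform cubic
# B-spline over one 4x4x4 span of control values.
def cubic_bspline_basis(t):
    """Uniform cubic B-spline blending weights at local parameter t in [0,1].
    The four weights always sum to 1 (partition of unity)."""
    return [(1 - t) ** 3 / 6.0,
            (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
            (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
            t ** 3 / 6.0]

def eval_volume(ctrl, u, v, w):
    """ctrl[i][j][k]: 4x4x4 grid of scalar control values."""
    bu, bv, bw = (cubic_bspline_basis(p) for p in (u, v, w))
    return sum(bu[i] * bv[j] * bw[k] * ctrl[i][j][k]
               for i in range(4) for j in range(4) for k in range(4))

ctrl = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
val = eval_volume(ctrl, 0.5, 0.5, 0.5)   # constant data reproduces 1.0
```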


A Method for Fuzzy-Data Processing of Cooked-rice Portion Size Estimation (식품 눈대중량 퍼지데이타의 처리방안에 관한 연구)

  • 김명희
    • Journal of Nutrition and Health / v.27 no.8 / pp.856-863 / 1994
  • To develop an optimized method for reducing the errors associated with estimating food portion sizes, fuzzy-data processing of portion size was performed, with cooked rice chosen as the food item. The experiment had two parts. First, to study respondents' conceptions of bowl size (large, medium, small), 11 bowls of different sizes and shapes were used and the actual weights of cooked rice were measured. Second, to study respondents' conceptions of volume (1, 1/2, 1/3, 1/4), 16 different volumes of cooked rice in bowls of the same size and shape were used. The respondents were 31 graduate students. After collecting the responses on size and volume, fuzzy sets of size and volume were produced, and critical values were calculated by defuzzification (mean-of-maximum method, center-of-area method). For various bowl sizes and volumes, the weights of cooked rice given by the critical values were compared with those calculated from the average portion sizes used in conventional methods. The results show large inter-subject variation in the conception of bowl size, especially for large bowls, whereas respondents' conception of volume is relatively accurate. The conception of bowl size seems to be influenced by bowl shape; since the new fuzzy set was calculated by the Cartesian product of bowl size and volume, bowl shape should be considered when estimating bowl size to make a more accurate fuzzy set for cooked-rice portion size. The limitations of this study are discussed. If more accurate size and volume data for many other food items are collected from a larger number of respondents, the errors associated with portion-size estimation can be reduced and rapid processing will be possible through computerized processing systems.
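The two defuzzification rules named above are standard and easy to sketch on a discrete fuzzy set. The membership grades below are made-up illustrative numbers, not data from the study.

```python
# Sketch: mean-of-maximum and center-of-area defuzzification of a
# discrete fuzzy set (domain points xs, membership grades mu).
def mean_of_maximum(xs, mu):
    """Mean of the domain points that attain the highest membership grade."""
    m = max(mu)
    peaks = [x for x, g in zip(xs, mu) if g == m]
    return sum(peaks) / len(peaks)

def center_of_area(xs, mu):
    """Membership-weighted centroid of the fuzzy set."""
    return sum(x * g for x, g in zip(xs, mu)) / sum(mu)

# illustrative fuzzy set for "a large bowl of cooked rice", in grams
grams = [200, 250, 300, 350, 400]
grade = [0.1, 0.6, 1.0, 1.0, 0.3]
mom = mean_of_maximum(grams, grade)   # mean of the two grade-1.0 points
coa = center_of_area(grams, grade)
```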


Volumetric Data Encoding Using Daubechies Wavelet Filter (Daubechies 웨이블릿 필터를 사용한 볼륨 데이터 인코딩)

  • Hur, Young-Ju;Park, Sang-Hun
    • The KIPS Transactions:PartA / v.13A no.7 s.104 / pp.639-646 / 2006
  • Data compression technologies enable us to store and transfer large amounts of data efficiently, and they are becoming ever more important with increasing data sizes and network traffic. Moreover, as computing power has grown, the volumetric data produced in various fields of applied science and engineering have become much larger. In this paper, we present a volume compression scheme that exploits the Daubechies wavelet transform. The proposed scheme supports lossy compression of 3D volume data and provides unit-wise random access. Since it shows far lower error rates than previous compression methods based on the Haar filter, it is well suited to interactive visualization applications as well as to compressing large volume data where image fidelity matters.
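The general lossy pattern, transform each unit, zero out small detail coefficients, reconstruct on demand, can be sketched as below. For brevity the sketch uses the simple Haar filter (the very baseline the paper improves on by using Daubechies filters), and the block data and threshold are illustrative.

```python
# Sketch: per-unit lossy compression by thresholding wavelet detail
# coefficients. Small details are zeroed (and would be cheap to store);
# each unit can be reconstructed independently (unit-wise random access).
def haar_forward(x):
    """One-level orthonormal Haar transform; len(x) must be even."""
    s = 2.0 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = 2.0 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def compress_unit(x, threshold):
    """Zero out small detail coefficients, then reconstruct the unit."""
    a, d = haar_forward(x)
    d = [v if abs(v) >= threshold else 0.0 for v in d]
    return haar_inverse(a, d)

unit = [10.0, 10.1, 10.0, 9.9, 50.0, 10.0]
recon = compress_unit(unit, threshold=1.0)
# smooth regions are slightly averaged; the sharp edge is kept exactly
```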