• Title/Summary/Keyword: large-scale data

2,767 search results

High-Resolution Tiled Display System for Visualization of Large-scale Analysis Data (초대형 해석 결과의 분석을 위한 고해상도 타일 가시화 시스템 개발)

  • 김홍성;조진연;양진오
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.34 no.6
    • /
    • pp.67-74
    • /
    • 2006
  • In this paper, a tiled display system is developed to obtain high-resolution images when visualizing large-scale structural analysis data with low-resolution display devices and a low-cost cluster computer system. Concerning the hardware, several crucial points are investigated, and a new beam-projector positioner is designed and manufactured to resolve the keystone phenomenon, which distorts the projected image. In the development of the tiled display software, Qt and OpenGL are utilized for the GUI and rendering, respectively. To obtain the entire tiled image, LAM-MPI is utilized to synchronize the sub-images produced by each cluster computer node.
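A minimal sketch of the synchronization step described above, assuming mpi4py in place of LAM-MPI and NumPy arrays standing in for the OpenGL-rendered tiles; the tile resolution, wall layout, and render_tile helper are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: each cluster node renders one tile of the global image,
# and a barrier keeps the per-node "buffer swaps" in lockstep.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # one rank per display node
TILE_W, TILE_H = 1024, 768      # resolution of a single projector (assumed)
GRID_COLS = 2                   # 2 x 2 tiled wall (assumed)

def render_tile(rank, frame):
    """Stand-in for the per-node OpenGL rendering of its sub-frustum."""
    col, row = rank % GRID_COLS, rank // GRID_COLS
    tile = np.zeros((TILE_H, TILE_W, 3), dtype=np.uint8)
    tile[:] = (frame + 10 * col + 40 * row) % 255   # dummy content
    return tile

for frame in range(100):
    tile = render_tile(rank, frame)
    # Synchronize all nodes before swapping buffers so the wall updates
    # as one coherent high-resolution image.
    comm.Barrier()
    # Optionally gather tiles on rank 0, e.g. to save a full-resolution frame.
    full = comm.gather(tile, root=0)
```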

Crop Leaf Disease Identification Using Deep Transfer Learning

  • Changjian Zhou;Yutong Zhang;Wenzhong Zhao
    • Journal of Information Processing Systems
    • /
    • v.20 no.2
    • /
    • pp.149-158
    • /
    • 2024
  • Traditional manual identification of crop leaf diseases is labor-intensive, and owing to limitations in manpower and resources it is difficult to survey crop diseases on a large scale. The emergence of artificial intelligence technologies, particularly the extensive application of deep learning, is expected to overcome these challenges and greatly improve the accuracy and efficiency of crop disease identification. Crop leaf disease identification models have been designed and trained on large-scale training data, enabling them to predict different categories of disease from unlabeled crop leaves. However, such models, which possess strong feature representation capabilities, require substantial training data, and these datasets are often scarce in practical farming scenarios. To address this issue and improve the feature learning ability of the models, this study proposes a deep transfer learning adaptation strategy. The proposed method transfers weights and parameters from models pre-trained on similar large-scale datasets such as ImageNet; the ImageNet pre-trained weights are adopted and fine-tuned on crop leaf disease features to improve prediction ability. In this study, 16,060 crop leaf disease images spanning 12 categories were collected for training. The experimental results demonstrate that the proposed method achieves an accuracy of 98% with the transferred ResNet-50 model, confirming the effectiveness of the transfer learning approach.
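A minimal sketch of the ImageNet-to-crop-leaf transfer step, assuming PyTorch/torchvision; the 12-class head follows the 12 categories reported above, while the frozen-backbone policy, optimizer, and learning rate are assumptions rather than the authors' exact recipe.

```python
# Hypothetical transfer-learning sketch: load ImageNet-pretrained ResNet-50,
# replace the classifier head for 12 crop-leaf disease classes, then fine-tune.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # 12 disease categories reported in the paper

model = models.resnet50(weights="IMAGENET1K_V1")   # transferred ImageNet weights
for p in model.parameters():
    p.requires_grad = False                        # freeze backbone (assumed policy)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of leaf images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the leaf images would also be resized and normalized with the ImageNet statistics, and deeper layers could be unfrozen once the new head has converged.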

Design of Large Cone Calorimeter for the Fire Study (화재연구를 위한 대형 콘 칼로리미터의 설계)

  • Lee, Eui-Ju
    • Fire Science and Engineering
    • /
    • v.20 no.4 s.64
    • /
    • pp.65-71
    • /
    • 2006
  • Major properties such as the heat release rate have been measured experimentally for the validation of fire models and the clarification of fire phenomena as fire research has become more rigorous in recent years. Although reduced-scale experiments also provide basic data and physical understanding in fire research, they cannot explain real fire problems directly because there is no exact scaling law connecting a real fire to a reduced-scale model. Therefore, large cone calorimeters have been built and used in a few foreign countries for the measurement of large-scale fires. This paper addresses the theoretical background and describes the key features in the design of such a facility. It will be a useful guide for implementing a large-scale cone calorimeter in the future.
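For context, a cone calorimeter infers the heat release rate from oxygen consumption; the sketch below uses the simplified relation Q ≈ E · ṁ_O2,consumed with Huggett's constant E ≈ 13.1 MJ per kg of O2 consumed. This illustrates the generic measurement principle only, not the specific design equations of the facility described in the paper.

```python
# Hypothetical sketch of oxygen-consumption calorimetry (simplified form).
E_O2 = 13.1e3  # kJ released per kg of O2 consumed (Huggett's constant)

def heat_release_rate(m_dot_exhaust, x_o2_ambient, x_o2_exhaust):
    """Approximate HRR in kW from exhaust mass flow (kg/s) and O2 mass fractions."""
    m_dot_o2_consumed = m_dot_exhaust * (x_o2_ambient - x_o2_exhaust)
    return E_O2 * m_dot_o2_consumed

# Example: 1.5 kg/s exhaust flow, O2 mass fraction dropping from 0.232 to 0.20
print(heat_release_rate(1.5, 0.232, 0.20))  # ~629 kW
```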

Current Status of Water Electrolysis Technology and Large-scale Demonstration Projects in Korea and Overseas (국내외 수전해 기술 및 대규모 실증 프로젝트 진행 현황)

  • JONGMIN BAEK;SU HYUN KIM
    • Transactions of the Korean Hydrogen and New Energy Society
    • /
    • v.35 no.1
    • /
    • pp.14-26
    • /
    • 2024
  • Global efforts continue toward the transition to a "carbon neutral (net zero)" society with zero carbon emissions by 2050. To this end, water electrolysis technology is being developed, which can store electricity generated from renewable energy in large quantities and over long periods in the form of hydrogen. Recently, various research efforts and large-scale projects on "green hydrogen", which is produced without carbon emissions, have been conducted. In this paper, water electrolysis technologies are compared and, based on data provided by the International Energy Agency (IEA), large-scale water electrolysis demonstration projects are analyzed by classifying them by technology, power supply, country, and end user. This analysis of large-scale demonstration projects is expected to provide research directions and road maps for the development and implementation of commercial projects in the future.
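A minimal sketch of the kind of project-level classification described above, assuming pandas and an illustrative table layout; the file name and column names are placeholders, not the IEA database schema.

```python
# Hypothetical sketch: tally demonstration projects by technology, power
# source, country, and end use, as in the paper's classification analysis.
import pandas as pd

# Assumed columns: technology (e.g. AEC/PEM/SOEC/AEM), power_source, country,
# end_use, capacity_mw. "iea_projects.csv" is a placeholder file name.
df = pd.read_csv("iea_projects.csv")

for key in ["technology", "power_source", "country", "end_use"]:
    summary = df.groupby(key)["capacity_mw"].agg(["count", "sum"])
    print(f"\nProjects by {key}:\n{summary}")
```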

Computational analysis of large-scale genome expression data

  • Zhang, Michael
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.41-44
    • /
    • 2000
  • With the advent of DNA microarray and "chip" technologies, gene expression in an organism can be monitored on a genomic scale, allowing the transcription levels of many genes to be measured simultaneously. Functional interpretation of massive expression data and linking such data to DNA sequences have become new challenges for bioinformatics. I will use yeast cell cycle expression data analysis as an example to demonstrate how specialized databases and computational methods may be used for extracting functional information. I will also briefly describe a novel clustering algorithm which has been applied to the cell cycle data.
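A minimal sketch of clustering genes by their expression profiles across cell-cycle time points, using generic k-means on synthetic data; this illustrates the task only and is not the novel clustering algorithm mentioned in the abstract.

```python
# Hypothetical sketch: cluster genes by expression profile across time points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expression = rng.normal(size=(500, 18))   # 500 genes x 18 time points (synthetic)

# Standardize each gene's profile so clustering follows shape, not amplitude.
profiles = (expression - expression.mean(axis=1, keepdims=True)) \
           / expression.std(axis=1, keepdims=True)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(labels))                # number of genes per cluster
```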


HORIZON RUN 4 SIMULATION: COUPLED EVOLUTION OF GALAXIES AND LARGE-SCALE STRUCTURES OF THE UNIVERSE

  • KIM, JUHAN;PARK, CHANGBOM;L'HUILLIER, BENJAMIN;HONG, SUNGWOOK E.
    • Journal of The Korean Astronomical Society
    • /
    • v.48 no.4
    • /
    • pp.213-228
    • /
    • 2015
  • The Horizon Run 4 is a cosmological N-body simulation designed for the study of coupled evolution between galaxies and large-scale structures of the Universe, and for the test of galaxy formation models. Using 6300³ gravitating particles in a cubic box of L_box = 3150 h⁻¹ Mpc, we build a dense forest of halo merger trees to trace the halo merger history with a halo mass resolution scale down to M_s = 2.7 × 10¹¹ h⁻¹ M_⊙. We build a set of particle and halo data, which can serve as testbeds for comparison of cosmological models and gravitational theories with observations. We find that the FoF halo mass function shows a substantial deviation from the universal form with tangible redshift evolution of amplitude and shape. At higher redshifts, the amplitude of the mass function is lower, and the functional form is shifted toward larger values of ln(1/σ). We also find that the baryonic acoustic oscillation feature in the two-point correlation function of mock galaxies becomes broader, with the peak position moving to smaller scales and the peak amplitude decreasing for increasing directional cosine μ, compared to the linear predictions. From the halo merger trees built from halo data at 75 redshifts, we measure the half-mass epoch of halos and find that less massive halos tend to reach half of their current mass at higher redshifts. Simulation outputs including snapshot data, past lightcone space data, and halo merger data are available at http://sdss.kias.re.kr/astro/Horizon-Run4.
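A minimal sketch of the half-mass epoch measurement mentioned above, assuming a halo's main-branch mass history is available as arrays of redshift and mass; the growth curve is synthetic and the function illustrates the definition only, not the Horizon Run 4 pipeline.

```python
# Hypothetical sketch: the half-mass redshift is where the main-branch mass
# history first reaches half of the halo's present-day (z = 0) mass.
import numpy as np

def half_mass_redshift(z, mass):
    """z, mass: main-branch history ordered from high redshift to z = 0."""
    m_half = 0.5 * mass[-1]
    # Interpolate the history at the half-mass threshold (assumes monotonic growth).
    return float(np.interp(m_half, mass, z))

# Synthetic example history sampled at 75 snapshots
z = np.linspace(4.0, 0.0, 75)
mass = 2.7e11 * (1.0 + 9.0 * (1.0 - z / 4.0) ** 2)   # toy growth curve
print(half_mass_redshift(z, mass))
```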

Very deep super-resolution for efficient cone-beam computed tomographic image restoration

  • Hwang, Jae Joon;Jung, Yun-Hoa;Cho, Bong-Hae;Heo, Min-Suk
    • Imaging Science in Dentistry
    • /
    • v.50 no.4
    • /
    • pp.331-337
    • /
    • 2020
  • Purpose: As cone-beam computed tomography (CBCT) has become the most widely used 3-dimensional (3D) imaging modality in the dental field, the storage space and cost of large-capacity data have become an important issue. Therefore, if 3D data can be stored at a clinically acceptable compression rate, the burden in terms of storage space and cost can be reduced and data can be managed more efficiently. In this study, a deep learning network for super-resolution was tested to restore compressed virtual CBCT images. Materials and Methods: Virtual CBCT image data were created from a publicly available online dataset (CQ500) of multidetector computed tomography images using CBCT reconstruction software (TIGRE). A very deep super-resolution (VDSR) network was trained to restore high-resolution virtual CBCT images from the low-resolution virtual CBCT images. Results: The images reconstructed by VDSR showed better image quality than bicubic interpolation at various scale ratios. The highest scale ratio with clinically acceptable reconstruction accuracy using VDSR was 2.1. Conclusion: VDSR showed promising restoration accuracy in this study. In the future, it will be necessary to experiment with new deep learning algorithms and large-scale data for clinical application of this technology.
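A minimal sketch of a VDSR-style network, assuming PyTorch; the 20-layer depth, 64-channel width, and bicubic pre-upsampling follow the original VDSR design, while the single-channel input is an assumption and the 2.1 scale factor echoes the highest clinically acceptable ratio reported above. Training code is omitted.

```python
# Hypothetical VDSR-style model: a deep stack of 3x3 conv + ReLU layers that
# predicts a residual added back to the bicubically upsampled input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VDSR(nn.Module):
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, lr, scale=2.1):
        # Pre-upsample with bicubic interpolation, then learn the residual.
        x = F.interpolate(lr, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        return x + self.body(x)

model = VDSR()
restored = model(torch.randn(1, 1, 100, 100))   # synthetic low-resolution slice
print(restored.shape)
```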

Automatic 3D soil model generation for southern part of the European side of Istanbul based on GIS database

  • Sisman, Rafet;Sahin, Abdurrahman;Hori, Muneo
    • Geomechanics and Engineering
    • /
    • v.13 no.6
    • /
    • pp.893-906
    • /
    • 2017
  • Automatic large-scale soil model generation is a critical stage in earthquake hazard simulation of urban areas. Manual model development may cause data losses and may not be effective when there are too many data points from different soil observations in a wide area. Geographic information systems (GIS) for storing and analyzing spatial data help scientists generate better models automatically. Although the original soil observations were limited to soil profile data, recent developments in mapping technology, interpolation methods, and remote sensing have enabled more advanced soil models. Together with advanced computational technology, much larger volumes of data can be handled, allowing scientists to tackle the difficult problem of describing the spatial variation of soil. In this study, an algorithm is proposed for automatic three-dimensional soil and velocity model development of the southern part of the European side of Istanbul, next to the Sea of Marmara, based on GIS data. In the proposed algorithm, the bedrock surface is first generated from the integration of geological and geophysical measurements. Then, layer surface contacts are integrated with data gathered from vertical borings, and interpolations are interpreted on sections between the borings automatically. A three-dimensional underground geology model is prepared using boring data, geologic cross sections, and formation base contours drawn in the light of these data. During the preparation of the model, classification studies are made based on formation models. Then, 3D velocity models are developed using geophysical measurements such as refraction-microtremor, array microtremor, and PS logging. The soil and velocity models are integrated and the final soil model is obtained. All stages of this algorithm are carried out automatically in the selected urban area. The system directly reads the GIS soil data for the selected part of the urban area, and a 3D soil model is automatically developed for large-scale earthquake hazard simulation studies.
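A minimal sketch of the surface-generation step, assuming SciPy's griddata is used to interpolate scattered bedrock-depth observations (from borings and geophysical surveys) onto a regular grid; coordinates and depths are synthetic and the real workflow is GIS-driven.

```python
# Hypothetical sketch: interpolate scattered bedrock-depth observations
# onto a regular grid to form a bedrock surface.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
x = rng.uniform(0, 10_000, 200)            # easting (m), synthetic borings
y = rng.uniform(0, 5_000, 200)             # northing (m)
bedrock_depth = 30 + 0.002 * x + rng.normal(0, 3, 200)   # depth below surface (m)

# Regular grid covering the study area
gx, gy = np.meshgrid(np.linspace(0, 10_000, 101), np.linspace(0, 5_000, 51))
surface = griddata((x, y), bedrock_depth, (gx, gy), method="linear")

print(np.nanmin(surface), np.nanmax(surface))
```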

Optimization of parameters in segmentation of large-scale spatial data sets (대용량 공간 자료들의 세그먼테이션에서의 모수들의 최적화)

  • Oh, Mi-Ra;Lee, Hyun-Ju
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.897-898
    • /
    • 2008
  • Array comparative genomic hybridization (aCGH) has been used to detect chromosomal regions of amplification or deletion, which allows identification of new cancer-related genes. As aCGH data, a type of large-scale spatial data, contain a significant amount of noise in raw form, it has been an important research issue to segment genomic DNA regions so as to detect the true underlying copy number aberrations (CNAs). In this study, we focus on applying a segmentation method to multiple data sets. We compare two different threshold values for analyzing aCGH data with the CBS method [1]. The proposed threshold values are the p-value and Q ± 1.5·IQR.
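A minimal sketch of a Q ± 1.5·IQR-type threshold applied to segment means, assuming Q refers to the first/third quartiles (a Tukey-fence reading of the abstract); the data are synthetic and the CBS segmentation itself is not reproduced.

```python
# Hypothetical sketch: flag segment means outside Q1 - 1.5*IQR or Q3 + 1.5*IQR
# as candidate copy number aberrations (gains/losses).
import numpy as np

rng = np.random.default_rng(2)
segment_means = np.concatenate([rng.normal(0, 0.1, 95),    # neutral segments
                                rng.normal(0.8, 0.1, 3),   # gains
                                rng.normal(-0.9, 0.1, 2)]) # losses

q1, q3 = np.percentile(segment_means, [25, 75])
iqr = q3 - q1
gain = segment_means > q3 + 1.5 * iqr
loss = segment_means < q1 - 1.5 * iqr
print(f"gains: {gain.sum()}, losses: {loss.sum()}")
```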


Statistical Issues in Genomic Cohort Studies (유전체 코호트 연구의 주요 통계학적 과제)

  • Park, So-Hee
    • Journal of Preventive Medicine and Public Health
    • /
    • v.40 no.2
    • /
    • pp.108-113
    • /
    • 2007
  • When conducting large-scale cohort studies, numerous statistical issues arise across study design, data collection, data analysis, and interpretation. In genomic cohort studies, these statistical problems become more complicated and need to be dealt with carefully. Rapid technical advances in genomic studies produce enormous amounts of data to be analyzed, and traditional statistical methods are no longer sufficient to handle these data. In this paper, we reviewed several important statistical issues that occur frequently in large-scale genomic cohort studies, including measurement error and the relevant correction methods, cost-efficient design strategies for the main cohort and validation studies, inflated Type I error, gene-gene and gene-environment interactions, and time-varying hazard ratios. It is very important to employ appropriate statistical methods in order to make the best use of valuable cohort data and produce valid and reliable study results.
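As a concrete illustration of the measurement-error issue listed above, the sketch below applies the classical attenuation (regression-calibration-style) correction, assuming the reliability ratio is known from a validation substudy; it is a textbook example, not the paper's recommended procedure.

```python
# Hypothetical sketch: correct an attenuated regression slope for classical
# measurement error using a reliability ratio from validation data.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
true_exposure = rng.normal(0, 1, n)
observed = true_exposure + rng.normal(0, 0.6, n)      # error-prone measurement
outcome = 0.5 * true_exposure + rng.normal(0, 1, n)

cov = np.cov(observed, outcome)
naive_beta = cov[0, 1] / cov[0, 0]                    # attenuated slope estimate

# Reliability ratio lambda = var(true) / var(observed); in practice estimated
# from a validation or repeated-measures substudy (assumed known here).
lam = 1.0 / (1.0 + 0.6**2)
corrected_beta = naive_beta / lam
print(round(naive_beta, 3), round(corrected_beta, 3))  # ~0.37 -> ~0.50
```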