• Title/Summary/Keyword: split data

Search results: 596

Estimation of the Temporal and Spatial Variation of Surface Temperature Distribution in the Korean Peninsula using NOAA/AVHRR Data (NOAA/AVHRR 위성자료를 이용한 한반도 표면온도의 시공간적 변동 추정)

  • Suh, Young-Sang;Lee, Gi-Chul;Lee, Na-Kyung;Jo, Myung-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.8 no.3 / pp.150-160 / 2005
  • In this study, spatiotemporal surface temperature changes in the Korean peninsula were analyzed. The surface temperature variation was estimated using the split window method and NOAA/AVHRR data from 1991, 1995 and 2000. The differences in temperature between daytime and nighttime ranged over $3-15^{\circ}C$ across the peninsula. The seasonal variations and yearly fluctuations in big cities were lower than those in rural areas, clearly showing the effects of urbanization. The characteristics of urban heat effects were further determined based on a day and night temperature comparison over the Busan metropolitan area between these periods. Finally, future use of this technology for urban environmental planning was suggested. (A generic sketch of the split-window estimate follows this entry.)

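The split-window technique referenced above estimates surface temperature from the brightness temperatures of two adjacent thermal-infrared channels (AVHRR channels 4 and 5), using their difference to correct for atmospheric water vapor. A minimal sketch of the generic linear form follows; the coefficients a and b are illustrative placeholders, not the values fitted in the paper.

```python
import numpy as np

def split_window_lst(t4, t5, a=2.63, b=1.27):
    """Generic split-window surface temperature estimate.

    t4, t5 : brightness temperatures (K) of AVHRR channels 4 and 5.
    a, b   : regression coefficients; these defaults are illustrative
             assumptions, not the coefficients used in the paper.
    """
    t4 = np.asarray(t4, dtype=float)
    t5 = np.asarray(t5, dtype=float)
    # The channel difference (t4 - t5) grows with water-vapor absorption,
    # so adding a scaled copy of it corrects the atmospheric attenuation.
    return t4 + a * (t4 - t5) + b

# Example: a pixel with T4 = 295.0 K and T5 = 293.5 K
print(split_window_lst(295.0, 293.5))   # ~300.2 K
```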

Lazy Bulk Insertion Method of Moving Objects Using Index Structure Estimation (색인 구조 예측을 통한 이동체의 지연 다량 삽입 기법)

  • Kim, Jeong-Hyun;Park, Sun-Young;Jang, Hyong-Il;Kim, Ho-Suk;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society / v.7 no.3 s.15 / pp.55-65 / 2005
  • This paper presents a bulk insertion technique for efficiently inserting data items. Traditional moving object databases have focused on efficient query processing, which happens mainly after index building, and traditional index structures have rarely considered the disk I/O overhead of rebuilding the index as data items are inserted. To solve this problem, this paper describes a new bulk insertion technique that efficiently handles the current positions of moving objects and greatly reduces update cost. The technique uses buffering for bulk insertion into spatial index structures such as the R-tree. To predict node splits and merges, a secondary index is added that manages information about the leaf nodes of the primary index, and operations are classified to eliminate unnecessary insertions and deletions. The technique then decides the processing order of the moving objects so that the split and merge costs caused by the update operations are minimized. Experimental results show that this technique reduces insertion cost compared with existing insertion techniques. (A minimal buffering sketch follows this entry.)

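The buffering idea above, grouping pending updates by the index node they are predicted to land in and flushing them together, can be sketched as follows. The `predict_leaf` and `bulk_apply` methods are hypothetical stand-ins for the paper's secondary-index estimation and batched write, not a real library API.

```python
from collections import defaultdict

class LazyBulkInserter:
    """Sketch of lazy bulk insertion into a spatial index such as an R-tree."""

    def __init__(self, index, capacity=1000):
        self.index = index        # assumed to offer predict_leaf() / bulk_apply()
        self.capacity = capacity  # flush threshold for the buffer
        self.pending = defaultdict(list)   # predicted leaf id -> queued updates
        self.count = 0

    def insert(self, obj_id, position):
        # Queue the update under the leaf node the index is predicted to use.
        leaf = self.index.predict_leaf(position)   # hypothetical helper
        self.pending[leaf].append((obj_id, position))
        self.count += 1
        if self.count >= self.capacity:
            self.flush()

    def flush(self):
        # One disk write per affected leaf instead of one write per object,
        # which is where the reduction in update cost comes from.
        for leaf, updates in self.pending.items():
            self.index.bulk_apply(leaf, updates)   # hypothetical helper
        self.pending.clear()
        self.count = 0
```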

Implementation of Tiling System for JPEG 2000 (JPEG 2000을 위한 Tiling 시스템의 구현)

  • Jang, Won-Woo;Cho, Sung-Dae;Kang, Bong-Soon
    • Journal of the Institute of Convergence Signal Processing / v.9 no.3 / pp.201-207 / 2008
  • This paper presents the implementation of a tiling system for the preprocessing functions of JPEG 2000. The system follows the JPEG 2000 standard and is designed to determine the size of the input image, to expand the image area, and to split the input image into several tiles. In order to split a progressively transmitted input image into tiles and transmit one tile at a time, the system stores the image in frame memory. It is therefore designed in Verilog-HDL as a finite state machine (FSM) that sequences through specific patterns of states in a predetermined order, and it can handle a maximum 5M image. Moreover, to identify the image size for expansion, we propose several formulas based on the remainder after division (rem), together with a truth table that determines the image-size input patterns from the results of these formulas. Under a TSMC 0.25 um ASIC library, the gate count is 18,725 and the maximum data arrival time is 18.94 ns. (A remainder-based tiling sketch follows this entry.)

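The remainder-after-division idea can be illustrated with a small sketch: the padding required on each side is derived from the remainder of the image dimension divided by the tile size, and the tile grid follows. The 256-pixel tile size is an assumption for illustration; the paper does not state its tile dimensions here.

```python
def tiling_plan(width, height, tile=256):
    """Remainder-based tile layout: pad each side up to a multiple of the
    tile size, then split the expanded image into a grid of equal tiles."""
    rem_w, rem_h = width % tile, height % tile    # remainder after division
    pad_w = (tile - rem_w) % tile                 # expansion derived from rem
    pad_h = (tile - rem_h) % tile
    cols = (width + pad_w) // tile
    rows = (height + pad_h) // tile
    return pad_w, pad_h, cols, rows

# A 640x480 image with 256x256 tiles is padded to 768x512, giving 3x2 tiles:
print(tiling_plan(640, 480))   # (128, 32, 3, 2)
```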

Data Compression Capable of Error Control Using Block-sorting and VF Arithmetic Code (블럭정렬과 VF형 산술부호에 의한 오류제어 기능을 갖는 데이터 압축)

  • Lee, Jin-Ho;Cho, Suk-Hee;Park, Ji-Hwan;Kang, Byong-Uk
    • The Transactions of the Korea Information Processing Society / v.2 no.5 / pp.677-690 / 1995
  • In this paper, we propose a high-efficiency data compression method capable of error control, using block-sorting, move-to-front (MTF) coding and a variable-length-in, fixed-length-out (VF) arithmetic code. First, the input is parsed into substrings of length N, and each substring is cyclically shifted one symbol at a time; the cyclically shifted rows are then sorted in lexicographical order. Second, the MTF technique is applied to exploit the locality of reference in the sorted substrings. The preprocessed sequence is then coded using a VF (variable-to-fixed) arithmetic code, which limits error propagation to a single codeword. The key point is how to split the fixed-length codeword set in proportion to the symbol probabilities in the VF arithmetic code. We develop a new VF arithmetic coding scheme that completely splits the codeword set for an arbitrary source alphabet. In addition, an extended representation of the symbol probabilities is designed using recursive Gray conversion. The performance of the proposed method is compared with other well-known source coding methods with respect to entropy, compression ratio and coding time. (An MTF sketch follows this entry.)

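The MTF stage is where the block-sorting pays off: sorting clusters equal symbols together, and move-to-front turns those clusters into runs of small integers that a subsequent arithmetic coder compresses well. A minimal sketch:

```python
def mtf_encode(data, alphabet):
    """Move-to-front: replace each symbol by its position in a recency
    list, then move that symbol to the front of the list."""
    table = list(alphabet)
    out = []
    for sym in data:
        idx = table.index(sym)
        out.append(idx)
        table.insert(0, table.pop(idx))   # move the symbol to the front
    return out

# After block-sorting, clustered symbols map to many zeros:
print(mtf_encode("bbbaaab", "ab"))   # [1, 0, 0, 1, 0, 0, 1]
```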

Face Detection Using Shapes and Colors in Various Backgrounds

  • Lee, Chang-Hyun;Lee, Hyun-Ji;Lee, Seung-Hyun;Oh, Joon-Taek;Park, Seung-Bo
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.19-27 / 2021
  • In this paper, we propose a method for detecting characters in images and locating their facial regions, which consists of two tasks. First, we separate two different characters to detect the face position of each character in the frame. For fast detection, we use You Only Look Once (YOLO), which finds faces in the image in real time, to extract the locations of the faces and mark them with object detection boxes. Second, we present three image processing methods to detect the accurate face area based on the object detection boxes. Each method uses HSV values extracted from the region estimated by the detection figure to detect the face region of the characters, and the size and shape of the detection figure are varied to compare the accuracy of each method. Each face detection method is compared and analyzed against comparative data and image processing data for reliability verification. As a result, we achieved the highest accuracy, 87%, when using the split rectangular method among the circular, rectangular, and split rectangular methods. (An HSV masking sketch follows this entry.)
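
The HSV refinement step can be sketched with OpenCV: a detector box is cropped, converted to HSV, and thresholded to a binary mask of candidate face pixels. The threshold bounds below are generic skin-tone values assumed for illustration, not the ones used in the paper.

```python
import cv2
import numpy as np

def face_region_hsv(frame, box, lower=(0, 40, 60), upper=(25, 255, 255)):
    """Refine a detector box with an HSV mask.

    frame : BGR image; box : (x, y, w, h) from a detector such as YOLO.
    lower/upper : HSV bounds (illustrative skin-tone assumptions).
    Returns a binary mask of candidate face pixels inside the box.
    """
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    return mask
```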

Prediction of Residual Stress for Dissimilar Metals Welding at Nuclear Power Plants Using Fuzzy Neural Network Models

  • Na, Man-Gyun;Kim, Jin-Weon;Lim, Dong-Hyuk
    • Nuclear Engineering and Technology / v.39 no.4 / pp.337-348 / 2007
  • A fuzzy neural network model is presented to predict the residual stress of dissimilar metal welding under various welding conditions. The fuzzy neural network model, which consists of a fuzzy inference system and a neuronal training system, is optimized by a hybrid learning method that combines a genetic algorithm, to optimize the membership function parameters, with a least squares method, to solve for the consequent parameters. The finite element analysis data are divided into four groups, split according to two end-section constraints and two prediction paths; four fuzzy neural network models were therefore applied to the numerical data obtained from the finite element analysis for the two end-section constraints and the two prediction paths. The models were trained with the aid of a data set prepared for training (training data), optimized by means of an optimization data set, and verified by means of a test data set that was different (independent) from the training and optimization data. The fuzzy neural network models proved sufficiently accurate for use in an integrity evaluation by predicting the residual stress of dissimilar metal welding zones. (A sketch of the three-way data split follows this entry.)
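
The train / optimization / test protocol described above generalizes to a simple three-way split; the 60/20/20 fractions below are illustrative assumptions, not the paper's proportions.

```python
import numpy as np

def three_way_split(data, rng, frac_train=0.6, frac_opt=0.2):
    """Split data into training, optimization and independent test sets.

    The optimization set steers the hyperparameter search (here, the GA
    over membership-function parameters); the test set stays untouched.
    """
    idx = rng.permutation(len(data))
    n_train = int(frac_train * len(data))
    n_opt = int(frac_opt * len(data))
    train = data[idx[:n_train]]
    opt = data[idx[n_train:n_train + n_opt]]
    test = data[idx[n_train + n_opt:]]
    return train, opt, test

rng = np.random.default_rng(0)
train, opt, test = three_way_split(np.arange(100), rng)   # 60 / 20 / 20 items
```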

Application of Proxy-basin Differential Split-Sampling and Blind-Validation Tests for Evaluating Hydrological Impact of Climate Change Using SWAT (SWAT을 이용한 기후변화의 수문학적 영향평가를 위한 Proxy-basin Differential Split-Sampling 및 Blind-Validation 테스트 적용)

  • Son, Kyong-Ho;Kim, Jeong-Kon
    • Journal of Korea Water Resources Association / v.41 no.10 / pp.969-982 / 2008
  • As hydrological models have progressively developed, they have come to be recognized as appropriate tools for managing water resources. In particular, the need to evaluate the effects of land use and climate change on hydrological phenomena has increased, which requires powerful validation methods for the hydrological models employed. Since measured streamflow data may be unavailable at many locations, or may include significant errors, model-simulated streamflow alone is sometimes used for hydrological analysis, and reducing the errors in such simulations requires a powerful model validation method. In this research, we demonstrated a validation methodology for the SWAT model using observed flow in two basins with different physical characteristics. First, we selected two basins with different hydrological characteristics, the Gap-cheon basin and the Yongdam basin, located in the Geum River Basin. Next, the methodology developed to estimate parameter values for the Gap-cheon basin was applied to estimate those of the Yongdam basin without prior calibration, in order to validate SWAT. The SWAT application for the Yongdam basin showed $R_{eff}$ ranging from 0.49 to 0.85 and $R^{2}$ from 0.49 to 0.84, and comparison of predicted and measured flow in each subbasin showed reasonable agreement. Furthermore, the model reproduced the overall trends of measured total flow and low flow, though peak flows were rather underestimated. The results of this study suggest that SWAT can be applied to predict the effects of future climate and land use changes on flow variability in river basins; however, additional studies are recommended to further verify the validity of the method in other river basins. (The $R_{eff}$ definition is sketched after this entry.)
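
The $R_{eff}$ values quoted above are conventionally the Nash-Sutcliffe model efficiency; assuming that standard definition, it is computed as follows.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency:

        R_eff = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)

    1.0 is a perfect fit; values <= 0 mean the simulation is no better
    than simply predicting the mean of the observations.
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```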

A maximum likelihood approach to infer demographic models

  • Chung, Yujin
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.385-395 / 2020
  • We present a new maximum likelihood approach to estimating demographic history using genomic data sampled from two populations. A demographic model such as an isolation-with-migration (IM) model explains the genetic divergence of two populations that split away from their common ancestral population. The standard probability model for an IM model contains a latent variable called a genealogy, which represents gene-specific evolutionary paths and links the genetic data to the IM model. Under an IM model, a genealogy consists of two kinds of evolutionary paths: vertical inheritance paths (coalescent events) through generations and horizontal paths (migration events) between populations. The computational complexity of IM model inference is one of the major limitations in analyzing genomic data. We propose a fast maximum likelihood approach to estimate IM models from genomic data. The first step analyzes the genomic data and maximizes the likelihood of a coalescent tree that contains the vertical paths of the genealogy. The second step analyzes the estimated coalescent trees and finds the IM model parameter values that maximize the probability of those trees after taking account of possible migration events. We evaluate the performance of the new method through analyses of simulated data and genomic data from two subspecies of common chimpanzees in Africa. (A toy version of the second maximization step follows this entry.)
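
The second maximization step can be illustrated with a toy model: pretend step 1 has produced one coalescent time per locus for a pair of lineages, exponentially distributed with mean theta, and grid-search the theta that maximizes the likelihood. A real IM analysis works on whole trees and integrates over migration events; this sketch shows only the grid-search skeleton.

```python
import numpy as np

rng = np.random.default_rng(1)
true_theta = 2.0
# Stand-in for step-1 output: one coalescent time per locus.
times = rng.exponential(true_theta, size=200)

# Step 2 in miniature: maximize the exponential log-likelihood
#   log L(theta) = sum_i ( -log(theta) - t_i / theta )
thetas = np.linspace(0.5, 5.0, 200)
log_lik = [np.sum(-np.log(th) - times / th) for th in thetas]
print("ML estimate of theta:", thetas[int(np.argmax(log_lik))])
```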

Utility Evaluation on Application of Geometric Mean Depending on Depth of Kidney in Split Renal Function Test Using 99mTc-MAG3 (99mTc-MAG3를 이용한 상대적 신장 기능 평가 시 신장 깊이에 따른 기하평균 적용의 유용성 평가)

  • Lee, Eun-Byeul;Lee, Wang-Hui;Ahn, Sung-Min
    • Journal of radiological science and technology / v.39 no.2 / pp.199-208 / 2016
  • A $^{99m}Tc-MAG_3$ renal scan acquires dynamic renal images using $^{99m}Tc-MAG_3$ and continuously visualizes the process of the radiotracer being taken up by the kidneys and excreted. Once the test starts, the count ratio of the two kidneys between 1 and 2.5 minutes is measured to obtain the split renal function, which expresses each kidney's share of overall renal function as a ratio. This study compares the split renal function obtained from posterior-detector data, the conventional test method, with that obtained from the geometric mean of the counts from the anterior and posterior detectors, and evaluates the utility of attenuation compensation according to differences in kidney depth. From July 2015 to February 2016, 33 patients who underwent a $^{99m}Tc-MAG_3$ renal scan (13 male, 20 female; mean age 44.66, range 5~70; mean height 160.40 cm; mean weight 55.40 kg) were selected as subjects. The mean kidney depth was 65.82 mm on the left and 71.62 mm on the right. In the supine position, 30 of the 33 patients showed a higher ratio for the deeper kidney and a lower ratio for the shallower kidney. This result is attributed to the correction of attenuation between the deeper kidney and the detector; in cases where the depths of the two kidneys differ, such as lesions in or around a kidney, spinal malformation, or an ectopic kidney, the ratio of the deeper kidney must be compensated, relative to the conventional test method (posterior-detector counting), for a more accurate calculation of split renal function. (The geometric-mean calculation is sketched after this entry.)
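
The geometric-mean correction itself is a one-line formula: for each kidney, GM = sqrt(anterior count x posterior count), which cancels the first-order depth-dependent attenuation before the left/right shares are computed.

```python
import math

def split_renal_function(ant_left, post_left, ant_right, post_right):
    """Split renal function (%) from anterior and posterior counts
    using the geometric mean GM = sqrt(A * P) for each kidney."""
    gm_left = math.sqrt(ant_left * post_left)
    gm_right = math.sqrt(ant_right * post_right)
    total = gm_left + gm_right
    return gm_left / total * 100, gm_right / total * 100

# Illustrative counts (not from the paper): a deeper left kidney loses
# posterior counts, which the anterior view partly restores.
print(split_renal_function(5200, 4300, 4100, 5000))   # ~ (51.1, 48.9)
```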

Performance Analysis on Declustering High-Dimensional Data by GRID Partitioning (그리드 분할에 의한 다차원 데이터 디클러스터링 성능 분석)

  • Kim, Hak-Cheol;Kim, Tae-Wan;Li, Ki-Joune
    • The KIPS Transactions:PartD / v.11D no.5 / pp.1011-1020 / 2004
  • A lot of work has been done to improve the I/O performance of systems that store and manage massive amounts of data by distributing the data across multiple disks and accessing them in parallel. Most of the previous work has focused on an efficient mapping from a grid cell, which is determined by the interval number in each dimension, to a disk number, on the assumption that each dimension is split into disjoint intervals so that the entire data space is GRID-partitioned. However, this work has ignored the effect of the GRID partitioning scheme itself on declustering performance. In this paper, we enhance the performance of mapping-function-based declustering algorithms by applying a good GRID partitioning method. To this end, we propose an estimation model that counts the number of grid cells intersected by a range query, and we apply the GRID partitioning scheme that minimizes the query result size among the possible schemes. While binary partitioning of every dimension is common for high-dimensional data, we choose fewer dimensions than binary partitioning would require and split several times along those dimensions, thereby reducing the number of grid cells touched by a query. Experimental results show that the proposed estimation model is accurate to within a 0.5% error ratio regardless of query size and dimensionality. By applying an efficient GRID partitioning scheme, we also improve the performance of the mapping-function-based declustering algorithm called Kronecker Sequence, which has been reported to be the best mapping function for high-dimensional data, by up to 23 times. (A simplified cell-count estimate follows this entry.)
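
The estimation idea can be shown with a simplified stand-in for the paper's model: under uniform query placement, a query of relative side q on a dimension cut into s equal intervals overlaps about q*s + 1 of them, and the per-dimension counts multiply. This is an illustrative approximation, not the paper's exact formula.

```python
def expected_cells(splits, query_sides):
    """Approximate number of grid cells a range query touches.

    splits      : number of intervals per dimension.
    query_sides : query extent per dimension, relative to the space (0..1).
    """
    cells = 1.0
    for s, q in zip(splits, query_sides):
        cells *= min(q * s + 1, s)   # cannot exceed the interval count
    return cells

# Four cells either way, but the split choice changes the touched count
# for the same query with relative side 0.3 in both dimensions:
print(expected_cells([4, 1], [0.3, 0.3]))   # ~2.2 cells
print(expected_cells([2, 2], [0.3, 0.3]))   # ~2.56 cells
```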