• Title/Summary/Keyword: Data Partitioning Evaluation (데이터 분할 평가)


Low-Power Data Cache Architecture and Microarchitecture-level Management Policy for Multimedia Application (멀티미디어 응용을 위한 저전력 데이터 캐쉬 구조 및 마이크로 아키텍쳐 수준 관리기법)

  • Yang Hoon-Mo;Kim Cheong-Gil;Park Gi-Ho;Kim Shin-Dug
    • The KIPS Transactions:PartA
    • /
    • v.13A no.3 s.100
    • /
    • pp.191-198
    • /
    • 2006
  • Today's battery-operated portable consumer devices tend to integrate ever more multimedia processing capabilities. In these devices, multimedia systems-on-chip handle specific algorithms that require intensive processing and consume significant power. As a result, the power efficiency of multimedia processing devices is becoming increasingly important. In this paper, we propose a reconfigurable data cache architecture, in which data allocation is constrained with software support, and evaluate its performance and power efficiency. Compared with conventional cache architectures, power consumption is reduced significantly, while the miss rate of the proposed architecture remains very close to that of conventional caches. The reduction in power consumption for the reconfigurable data cache architecture is 33.2%, 53.3%, and 70.4% compared with direct-mapped, 2-way, and 4-way caches, respectively.

Skewed Data Handling Technique Using an Enhanced Spatial Hash Join Algorithm (개선된 공간 해쉬 조인 알고리즘을 이용한 편중 데이터 처리 기법)

  • Shim Young-Bok;Lee Jong-Yun
    • The KIPS Transactions:PartD
    • /
    • v.12D no.2 s.98
    • /
    • pp.179-188
    • /
    • 2005
  • Spatial join has been studied extensively over the last decade. In this paper, we focus on the filtering step of candidate objects for spatial join operations when neither input table is indexed. For this case, many algorithms have been presented and have shown excellent performance on most spatial data. However, if the data sets of the input tables are skewed, join performance degrades dramatically, and little research has attempted to solve this problem in the presence of skewed data. Therefore, we propose a spatial hash strip join (SHSJ) algorithm that combines the properties of the existing spatial hash join (SHJ) algorithm, which partitions space according to the distribution of the input data, with the SSSJ algorithm. Finally, to show that SHSJ outperforms in both uniform and skewed cases, we evaluate it on the TIGER/Line data sets and compare it with the SHJ algorithm.
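
    The SHSJ algorithm itself is not detailed in the abstract; the following minimal Python sketch only illustrates the general shape of a spatial hash join over unindexed rectangle sets, under the assumption that object extents are small relative to the grid cell size. All names and the overflow threshold are hypothetical; in SHSJ, a bucket exceeding such a threshold would be diverted to a sweep-based SSSJ pass rather than joined in place.

    ```python
    from collections import defaultdict

    def mbr_intersects(a, b):
        # a, b are MBRs as (xmin, ymin, xmax, ymax)
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    def bucket_of(mbr, cell):
        # hash an object by the grid cell containing its MBR centre
        cx = (mbr[0] + mbr[2]) / 2.0
        cy = (mbr[1] + mbr[3]) / 2.0
        return (int(cx // cell), int(cy // cell))

    def spatial_hash_join(r, s, cell=10.0):
        """Join two unindexed rectangle sets on MBR intersection by
        hashing into grid cells and probing neighbouring cells."""
        buckets_r = defaultdict(list)
        for i, m in enumerate(r):
            buckets_r[bucket_of(m, cell)].append(i)
        out = []
        for j, m in enumerate(s):
            bx, by = bucket_of(m, cell)
            # probe the cell and its neighbours: centres of intersecting
            # MBRs may fall in adjacent cells (assumes extents < cell)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for i in buckets_r.get((bx + dx, by + dy), []):
                        if mbr_intersects(r[i], m):
                            out.append((i, j))
        return out
    ```

    With skewed data, most objects land in a few buckets; detecting those oversized buckets and handling them separately is the part SHSJ adds over plain SHJ.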

Fuzzy Minimum Interval Partition for Uncertain Time Interval (불확실한 시간 간격을 위한 퍼지 최소 간격 분할 기법)

  • Heo, Mun-Haeng;Lee, Gwang-Gyu;Lee, Jun-Uk;Ryu, Geun-Ho;Kim, Hong-Gi
    • The KIPS Transactions:PartD
    • /
    • v.9D no.4
    • /
    • pp.571-578
    • /
    • 2002
  • In temporal databases, the time dimension added for history management increases the complexity and cost of join operations. To solve this problem, a method was introduced that partitions the time range into fixed intervals and then joins the divided segments of time data. However, existing methods cannot resolve the ambiguity of time boundaries at partition points caused by temporal granularity. In this paper, we propose the Fuzzy Minimum Interval Partition (FMIP) method, which applies the possibility distributions of fuzzy theory to model uncertain time-interval boundaries at partition lines.
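
    The abstract does not give FMIP's formulas; the sketch below only illustrates the underlying idea of a fuzzy partition boundary, using a hypothetical linear possibility distribution of half-width `fuzz` around the boundary instead of a crisp cut.

    ```python
    def boundary_membership(t, boundary, fuzz):
        """Possibility that time t lies on the left side of an uncertain
        partition boundary. Inside the fuzzy band [boundary - fuzz,
        boundary + fuzz] the membership degrades linearly instead of
        jumping from 1 to 0 at a crisp cut."""
        if t <= boundary - fuzz:
            return 1.0
        if t >= boundary + fuzz:
            return 0.0
        return (boundary + fuzz - t) / (2.0 * fuzz)
    ```

    A time interval overlapping the fuzzy band would then belong to both partitions with complementary degrees, rather than being forced to one side by the granularity of the timestamps.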

Characteristics of Fuzzy Inference Systems by Means of Partition of Input Spaces in Nonlinear Process (비선형 공정에서의 입력 공간 분할에 의한 퍼지 추론 시스템의 특성 분석)

  • Park, Keon-Jun;Lee, Dong-Yoon
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.3
    • /
    • pp.48-55
    • /
    • 2011
  • In this paper, we analyze the input-output characteristics of fuzzy inference systems according to the partition of the entire input space and the fuzzy reasoning method, in order to identify fuzzy models for a nonlinear process. The fuzzy model is expressed by identifying the structure and parameters of the system by means of the input variables, the fuzzy partition of the input space, and the consequence polynomial functions. In the premise part of the rules, the Min-Max method, which uses the minimum and maximum values of the input data set, and the C-Means clustering algorithm, which forms the input data into hard clusters, are used to identify the fuzzy model, with a series of triangular membership functions. In the consequence part of the rules, fuzzy reasoning is conducted by two types of inference, and the consequence parameters of the rules, namely the polynomial coefficients, are identified by the standard least-squares method. Finally, we evaluate the performance of the model on the gas furnace process, a widely used nonlinear process benchmark.
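
    As a small illustration of the Min-Max premise partition described above (not the paper's code; function names are hypothetical), one input dimension can be covered by evenly spaced triangular membership functions spanning the data's minimum-to-maximum range:

    ```python
    def minmax_triangles(data, n):
        """Divide [min(data), max(data)] into n triangular membership
        functions with evenly spaced centres, as in a Min-Max premise
        partition. Returns the centres and a membership function."""
        lo, hi = min(data), max(data)
        step = (hi - lo) / (n - 1)
        centres = [lo + i * step for i in range(n)]

        def mu(x, c):
            # triangular membership: 1 at the centre, 0 one step away
            return max(0.0, 1.0 - abs(x - c) / step)

        return centres, mu
    ```

    Adjacent triangles overlap so that every input value activates at most two fuzzy sets, whose degrees sum to one inside the range.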

Image Segmentation by Cascaded Superpixel Merging with Privileged Information (단계적 슈퍼픽셀 병합을 통한 이미지 분할 방법에서 특권정보의 활용 방안)

  • Park, Yongjin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.9
    • /
    • pp.1049-1059
    • /
    • 2019
  • We propose a learning-based image segmentation algorithm. Starting from superpixels, our method learns the probability of merging two regions from ground truth made by humans. The learned information is then used to decide whether two regions should be merged during the segmentation stage. Unlike existing learning-based algorithms, we use both local and object information. The local information consists of features computed from superpixels, while the object information is high-level information available only during the learning process. The object information is treated as privileged information, which allows the use of a framework such as SVM+ that exploits privileged information. In experiments on the Berkeley Segmentation Dataset and Benchmark (BSDS 500) and the PASCAL Visual Object Classes Challenge (VOC 2012) data set, our model exhibited the best performance with a relatively small training data set and also showed competitive results with a sufficiently large training data set.

Semantic Building Segmentation Using the Combination of Improved DeepResUNet and Convolutional Block Attention Module (개선된 DeepResUNet과 컨볼루션 블록 어텐션 모듈의 결합을 이용한 의미론적 건물 분할)

  • Ye, Chul-Soo;Ahn, Young-Man;Baek, Tae-Woong;Kim, Kyung-Tae
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1091-1100
    • /
    • 2022
  • As deep learning technology advances and various high-resolution remote sensing images become available, interest in using deep learning and remote sensing big data to detect buildings and changes in urban areas is increasing significantly. In this paper, for semantic building segmentation of high-resolution remote sensing images, we propose a new building segmentation model, Convolutional Block Attention Module (CBAM)-DRUNet, which takes the DeepResUNet model, known for its excellent building segmentation performance, as its basic structure, improves its residual learning unit, and combines it with a CBAM. In a performance evaluation on the WHU and INRIA datasets, the proposed model showed excellent performance in terms of F1 score, accuracy, and recall compared to UNet, ResUNet, and DeepResUNet.

Processing of uncertain position of regularly sampling moving objects (주기적인 위치보고 이동체의 불확실 위치 처리)

  • 진희규;김동현;임덕성;홍봉희
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10b
    • /
    • pp.241-243
    • /
    • 2004
  • In location-based service applications, the positions of moving objects are generally collected periodically in order to store location data. Because periodically sampled position data cannot reflect position changes between reporting periods, an error arises between the actual position and the position predicted by a linear function of time. These uncertain future positions in turn degrade the accuracy of retrieval in future-position indexes. In this paper, to process the uncertain position data of periodically reporting moving objects, we use an uncertainty region that augments the predicted position with the expected prediction error. To set the uncertainty region of a moving object, we propose a recent-prediction-error weighting technique and a Kalman filter technique, implement the resulting uncertain-position processing methods in a future-position index for moving objects, and perform a comparative performance evaluation. The results show that the proposed uncertain-position processing methods improve the accuracy of range queries compared with the existing linear-function-based prediction.
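
    The paper's filter design is not given in the abstract; the sketch below only shows the generic shape of the idea, with a scalar Kalman filter under a simple random-walk motion model (all parameters hypothetical) and an uncertainty region derived from the estimated error variance.

    ```python
    def kalman_step(x, p, z, q=1.0, r=4.0):
        """One predict/update cycle of a scalar Kalman filter.
        x, p: previous position estimate and its variance; z: newly
        reported position; q, r: assumed process and measurement
        noise variances."""
        # predict (random-walk motion model for simplicity)
        x_pred, p_pred = x, p + q
        # update with the reported position
        k = p_pred / (p_pred + r)
        x_new = x_pred + k * (z - x_pred)
        p_new = (1.0 - k) * p_pred
        return x_new, p_new

    def uncertainty_region(x, p, n_sigma=2.0):
        """Interval the object lies in with high confidence: the
        predicted position widened by the estimated error."""
        half = n_sigma * p ** 0.5
        return (x - half, x + half)
    ```

    Indexing this interval instead of the bare predicted point is what recovers the range-query accuracy lost to prediction error.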


An evaluation methodology for cement concrete lining crack segmentation deep learning model (콘크리트 라이닝 균열 분할 딥러닝 모델 평가 방법)

  • Ham, Sangwoo;Bae, Soohyeon;Lee, Impyeong;Lee, Gyu-Phil;Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.6
    • /
    • pp.513-524
    • /
    • 2022
  • Recently, detecting damage to civil infrastructure from digital images using deep learning has become a very popular research topic. In order to apply those methodologies in the field, it is essential to explain the robustness of deep learning models. Our research points out that the existing pixel-based evaluation metrics for deep learning models are not sufficient for crack detection, since cracks have a linear appearance, and proposes a new evaluation methodology that explains crack segmentation models more rationally. Specifically, we design, implement, and validate a methodology that generates a tolerance buffer alongside the skeletonized ground truth and prediction results, so as to consider the overall topological similarity of the ground truth and the prediction rather than pixel-wise accuracy. Using this methodology, we could overcome the over-estimation and under-estimation problems of crack segmentation model evaluation, and we expect that it can explain crack segmentation deep learning models better.
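
    The exact buffer construction is not specified in the abstract; the following minimal sketch illustrates the general idea with a discrete Chebyshev-distance buffer around pixel sets (names and the tolerance value are hypothetical), counting a pixel as correct when it falls inside the other skeleton's buffer instead of requiring exact overlap.

    ```python
    def buffer(pixels, tol):
        """All pixels within Chebyshev distance `tol` of the given
        pixel set: a discrete stand-in for the tolerance buffer
        generated around a skeletonized crack."""
        out = set()
        for (r, c) in pixels:
            for dr in range(-tol, tol + 1):
                for dc in range(-tol, tol + 1):
                    out.add((r + dr, c + dc))
        return out

    def buffered_scores(gt, pred, tol=2):
        """Precision/recall that accept a predicted crack pixel as a
        hit when it lies inside the ground-truth buffer, and vice
        versa, instead of demanding pixel-exact overlap."""
        gt_buf, pred_buf = buffer(gt, tol), buffer(pred, tol)
        precision = len(pred & gt_buf) / len(pred) if pred else 0.0
        recall = len(gt & pred_buf) / len(gt) if gt else 0.0
        return precision, recall
    ```

    A prediction shifted one pixel off a thin crack scores zero under pixel-wise metrics but (correctly) near one under the buffered metrics, which is exactly the over-/under-estimation problem the paper targets.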

Dynamic Partitioning Scheme for Large RDF Data in Heterogeneous Environments (이종 환경에서 대용량 RDF 데이터를 위한 동적 분할 기법)

  • Kim, Minsoo;Lim, Jongtae;Bok, Kyoungsoo;Yoo, Jaesoo
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.10
    • /
    • pp.605-610
    • /
    • 2017
  • In distributed environments, dynamic partitioning is needed to resolve the load on a particular server or the load caused by communication among servers. In heterogeneous environments, existing dynamic partitioning schemes can assign the same load to a server with low physical performance, which delays query response time. In this paper, we propose a dynamic partitioning scheme for large RDF data in heterogeneous environments. The proposed scheme calculates query load from query frequency and the number of vertices used in each query, for load balancing. In addition, we calculate server load by considering the physical performance of each server, so that servers with lower physical performance are allocated less load in a heterogeneous environment. We perform dynamic partitioning that minimizes the number of edge cuts in order to reduce traffic among servers. To show the superiority of the proposed scheme, we compare it with an existing dynamic partitioning scheme through a performance evaluation.
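
    The paper's placement algorithm is not given in the abstract; the sketch below only illustrates capacity-aware load balancing in the simplest form, greedily placing partitions so that weaker servers receive proportionally less work. Names and the capacity model are hypothetical, and the edge-cut objective is omitted.

    ```python
    def assign_partitions(loads, capacities):
        """Place each partition (heaviest first) on the server whose
        load relative to its physical capacity is currently lowest.
        loads: {partition_id: query load}; capacities: per-server
        performance weights."""
        placement = {}
        used = [0.0] * len(capacities)
        for pid, load in sorted(loads.items(), key=lambda kv: -kv[1]):
            s = min(range(len(capacities)),
                    key=lambda i: used[i] / capacities[i])
            placement[pid] = s
            used[s] += load
        return placement, used
    ```

    Dividing by capacity is the heterogeneous-environment twist: a server with half the performance weight looks "full" at half the absolute load.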

Optimized Polynomial RBF Neural Networks Based on PSO Algorithm (PSO 기반 최적화 다항식 RBF 뉴럴 네트워크)

  • Baek, Jin-Yeol;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference
    • /
    • 2008.07a
    • /
    • pp.1887-1888
    • /
    • 2008
  • In this paper, we design a fuzzy-inference-based polynomial radial basis function neural network (pRBFNN) and identify the model's parameters using the Particle Swarm Optimization (PSO) algorithm. The proposed model is expressed as functional modules of the premise, consequence, and inference parts through fuzzy rules written in "IF-THEN" form. The structure of the premise part, the partition of the input space, is determined by HCM clustering, and in place of the commonly used Gaussian function we propose a cone-shaped linear function as the RBF. In addition, when partitioning the input space, a distribution constant is designed for each input to reflect the characteristics of the data set and increase the precision of the partition. In the consequence part, we propose a pRBFNN that expresses the conventional constant connection weights in polynomial form. To evaluate the proposed model, we apply the gas furnace time-series data used by Box and Jenkins, and discuss its approximation and generalization ability in comparison with existing models.
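
    The cone-shaped basis function mentioned above can be sketched as follows (a minimal illustration, not the paper's formulation; the per-input distribution constant is folded into a single `width` here):

    ```python
    def cone_rbf(x, centre, width):
        """Cone-shaped (linear) radial basis function: the activation
        falls off linearly with Euclidean distance from the centre and
        reaches zero at `width`, in place of the usual Gaussian RBF."""
        d = sum((a - b) ** 2 for a, b in zip(x, centre)) ** 0.5
        return max(0.0, 1.0 - d / width)
    ```

    Unlike a Gaussian, this basis has strictly bounded support, so each rule is active only in a finite region of the partitioned input space.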
