• Title/Summary/Keyword: spatial cluster (공간클러스터)

Search Results: 375

Outcomes and Challenges of the Merger between Chungju National University and Cheongju National College of Science and Technology (충주대-청주과학대 통합의 성과와 과제)

  • Park, Hong-Yun
    • University Education (대학교육)
    • /
    • s.147
    • /
    • pp.84-89
    • /
    • 2007
  • The structural reform through the merger of Chungju National University and Cheongju National College of Science and Technology was pursued with the goals of strengthening university competitiveness, raising the quality of education by improving educational conditions, securing stable resources and improving equity in higher education, and driving regional innovation by building a cluster with local industry, and it achieved results on these fronts. However, issues such as space adjustment following departmental mergers and the relocation of departments between campuses, the perfunctory operation of the faculty system, and the adjustment and placement of academic and administrative staff within the university remain largely unresolved and persist as sources of latent conflict.


Splitting policies using trajectory clusters in R-tree based index structures for moving objects databases (이동체 데이터베이스를 위한 R-tree 기반 색인구조에서 궤적 클러스터를 사용한 분할 정책)

  • 김진곤;전봉기;홍봉희
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10b
    • /
    • pp.37-39
    • /
    • 2003
  • R-tree-family indexes are widely used as past-trajectory indexes for moving-object databases. However, because R-tree-family indexes consider only spatial proximity, retrieving a single trajectory requires many node accesses. In searching moving-object indexes, range queries and trajectory queries have conflicting requirements, spatial proximity versus trajectory connectivity, and have therefore not been considered together. To improve range-query performance, heavy overlap between nodes and dead space must be reduced; to improve trajectory-query performance, the trajectories of moving objects must be preserved. To satisfy these requirements, this paper proposes new splitting policies for R-tree-based index structures: a space-axis splitting policy that groups segments of the same trajectory together for trajectory clustering, and a time-axis splitting policy that increases space utilization. We implement the modified splitting policies in an R-tree-based index structure and evaluate them experimentally, showing that they deliver superior search performance.
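
A minimal sketch of the trajectory-grouping split idea described above (the function name, entry layout, and fill heuristic are illustrative assumptions, not the paper's algorithm): when a node overflows, entries belonging to the same trajectory are kept in the same child so trajectory queries touch fewer nodes.

```python
# Sketch: split an overflowing R-tree leaf so that entries from the
# same trajectory stay together (trajectory preservation).
from collections import defaultdict

def split_by_trajectory(entries, min_fill):
    """entries: list of (trajectory_id, mbr) tuples; returns two groups.

    Whole trajectory groups are assigned to the emptier node so that
    segments of one trajectory are not scattered across both nodes.
    """
    groups = defaultdict(list)
    for tid, mbr in entries:
        groups[tid].append((tid, mbr))
    left, right = [], []
    # place larger trajectory groups first, into the emptier node
    for tid in sorted(groups, key=lambda t: -len(groups[t])):
        target = left if len(left) <= len(right) else right
        target.extend(groups[tid])
    # enforce a minimum fill by moving single entries if needed
    while len(left) < min_fill:
        left.append(right.pop())
    while len(right) < min_fill:
        right.append(left.pop())
    return left, right
```

A real implementation would also weigh the resulting bounding boxes (overlap and dead space), which this sketch ignores.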


Performance Enhancement of a DVA-tree by the Independent Vector Approximation (독립적인 벡터 근사에 의한 분산 벡터 근사 트리의 성능 강화)

  • Choi, Hyun-Hwa;Lee, Kyu-Chul
    • The KIPS Transactions:PartD
    • /
    • v.19D no.2
    • /
    • pp.151-160
    • /
    • 2012
  • Most distributed high-dimensional indexing structures provide reasonable search performance when the dataset is uniformly distributed. However, when the dataset is clustered or skewed, search performance gradually degrades compared with the uniform case. We propose a method for improving k-nearest-neighbor search performance in the distributed vector approximation-tree on strongly clustered or skewed datasets. The basic idea is to compute the volumes of the leaf nodes in the top-tree of a distributed vector approximation-tree and to assign them different numbers of bits so as to preserve the discriminating power of the vector approximations; in other words, more bits are assigned to high-density clusters. We conducted experiments comparing search performance with the distributed hybrid spill-tree and the distributed vector approximation-tree on synthetic and real datasets. The results show that our scheme yields consistent and significant performance improvements over the distributed vector approximation-tree for strongly clustered or skewed datasets.
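
The density-proportional bit assignment can be sketched as follows (a simplified illustration under assumed names; the paper's actual allocation over the top-tree is more involved):

```python
# Sketch: give leaf regions with higher point density more
# approximation bits, so vectors inside dense clusters remain
# distinguishable in the vector approximation.
def bits_per_leaf(leaf_stats, base_bits=4, extra_bits=4):
    """leaf_stats: {leaf_id: (num_points, volume)}.

    Returns {leaf_id: bits}, allocating extra bits by relative density.
    """
    density = {k: n / max(v, 1e-12) for k, (n, v) in leaf_stats.items()}
    dmax = max(density.values())
    return {
        k: base_bits + round(extra_bits * d / dmax)
        for k, d in density.items()
    }
```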

Improved FCM Algorithm using Entropy-based Weight and Intercluster (엔트로피 기반의 가중치와 분포크기를 이용한 향상된 FCM 알고리즘)

  • Kwak Hyun-Wook;Oh Jun-Taek;Sohn Young-Ho;Kim Wook-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.1-8
    • /
    • 2006
  • This paper proposes an improved FCM (Fuzzy C-means) algorithm for gray-scale images that uses intercluster information and an entropy-based weight. Fuzzy clustering methods have been used extensively in image segmentation because they extract feature information of regions, and most of them are based on the FCM algorithm. However, FCM remains sensitive to noise because it does not incorporate spatial information, and it cannot correctly classify pixels according to the feature-based distributions of the clusters. To address these problems, we apply a weight and intercluster information to the traditional FCM algorithm. The weight is obtained from entropy information based on each cluster's number of neighboring pixels, and the membership of a pixel is assigned using information that takes the feature-based intercluster relationship into account. Experiments confirmed that the proposed method is more tolerant to noise and superior to existing methods.
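
For context, one iteration of the baseline FCM that the paper modifies can be sketched as below (a generic textbook update, not the authors' entropy-weighted variant; array shapes are assumptions):

```python
# Sketch: one standard FCM iteration - membership update followed by
# centroid update, with fuzzifier m.
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-9):
    """X: (n, d) pixel features, centers: (c, d) cluster centers."""
    # distances from every point to every center, (n, c)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)        # memberships, rows sum to 1
    Um = U ** m
    new_centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, new_centers
```

The paper's contribution replaces the purely distance-based membership above with one that also weighs neighborhood entropy and intercluster distribution.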

The Value chain and the Networks of Apparel Industry in Guro-Gasan, Seoul (서울 구로.가산동 의류패션산업의 가치사슬과 네트워크)

  • Lee, Sang Wook;Kim, Kyung-Min
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.17 no.3
    • /
    • pp.465-481
    • /
    • 2014
  • This study examines the apparel industry in Guro-Gasan, which is growing from an agglomeration into one of Seoul's apparel-industry clusters. The purpose of this study is twofold: (1) to define the industrial functions and roles of Guro-Gasan in the value chain of the apparel industry; and (2) to determine whether an industrial cluster is built on local networks. The study reviews the formation and transition of the local industries and their size, characteristics, spatial distribution, and spatial properties using GIS analysis and field surveys. Through in-depth interviews, it analyzes the production system and the spatial dispersion of the value chain to understand the area's functions and roles.


Performance Improvement by Cluster Analysis in Korean-English and Japanese-English Cross-Language Information Retrieval (한국어-영어/일본어-영어 교차언어정보검색에서 클러스터 분석을 통한 성능 향상)

  • Lee, Kyung-Soon
    • The KIPS Transactions:PartB
    • /
    • v.11B no.2
    • /
    • pp.233-240
    • /
    • 2004
  • This paper presents a method that implicitly resolves ambiguities using dynamic incremental clustering in Korean-to-English and Japanese-to-English cross-language information retrieval (CLIR). The main objective is to show that document clusters can effectively resolve the ambiguities that increase dramatically in translated queries, while taking into account the context of all the terms in a document. In the proposed framework, a query in Korean/Japanese is first translated into English by looking up bilingual dictionaries; documents are then retrieved for the translated query terms using the vector-space or probabilistic retrieval model. For the top-ranked retrieved documents, query-oriented document clusters are created incrementally, and the weight of each retrieved document is recalculated using the clusters. In experiments on a TREC test collection, our method achieved 39.41% and 36.79% improvements over translated queries without ambiguity resolution in Korean-to-English CLIR, and 17.89% and 30.46% improvements in Japanese-to-English CLIR, on vector-space and probabilistic retrieval, respectively. Compared with blind feedback in Korean-to-English CLIR, our method achieved a 12.30% improvement over all translated queries. These results indicate that cluster analysis helps resolve ambiguity.
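
The cluster-based re-weighting step can be sketched as follows (the interpolation formula and parameter `alpha` are illustrative assumptions, not the paper's exact weighting): documents whose cluster scores well as a whole, i.e. those sharing a coherent translation sense, rise together.

```python
# Sketch: re-weight retrieved documents by blending each document's
# score with the mean score of its cluster.
def reweight_by_cluster(scores, clusters, alpha=0.5):
    """scores: {doc_id: score}; clusters: list of doc-id lists."""
    new = dict(scores)
    for cluster in clusters:
        mean = sum(scores[d] for d in cluster) / len(cluster)
        for d in cluster:
            new[d] = (1 - alpha) * scores[d] + alpha * mean
    return new
```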

A Distributed Layer 7 Server Load Balancing (분산형 레이어 7 서버 부하 분산)

  • Kwon, Hui-Ung;Kwak, Hu-Keun;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.15A no.4
    • /
    • pp.199-210
    • /
    • 2008
  • A clustering-based wireless internet proxy server needs a layer-7 load balancer with URL-hashing methods to reduce the total storage space of the servers. A layer-4 load balancer placed in front of a server cluster distributes client requests at the transport layer (TCP or UDP) among servers holding the same contents, without inspecting the request content. A layer-7 load balancer, by contrast, parses client requests at the application layer and distributes them to servers according to the type of requested content. This allows servers to hold mutually exclusive contents, minimizing total storage space and improving overall cluster performance; however, its scalability is limited by the high overhead of application-layer request parsing, unlike a layer-4 balancer. To overcome this scalability limitation, this paper proposes a distributed layer-7 load balancer that replaces the single layer-7 balancer of the conventional scheme with a single layer-4 balancer in front of the server cluster and a set of layer-7 balancers within it. In a clustering-based wireless internet proxy server, we implemented the conventional scheme using KTCPVS (Kernel TCP Virtual Server), a Linux-based layer-7 load balancer, and the proposed scheme using IPVS (IP Virtual Server), a Linux-based layer-4 load balancer, with KTCPVS installed on each server and the two working together. Experiments with 16 PCs show that, as the number of servers grows, the proposed scheme scales better and performs higher than the conventional scheme.
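
The URL-hashing dispatch at the heart of layer-7 balancing can be sketched as follows (backend names and hash choice are illustrative; KTCPVS performs this inside the kernel): each URL maps deterministically to one backend, so backends cache disjoint slices of the content.

```python
# Sketch: deterministic URL-hash dispatch - the same URL always goes
# to the same backend, giving exclusive content placement.
import hashlib

def pick_backend(url, backends):
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return backends[h % len(backends)]
```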

SPQUSAR : A Large-Scale Qualitative Spatial Reasoner Using Apache Spark (SPQUSAR : Apache Spark를 이용한 대용량의 정성적 공간 추론기)

  • Kim, Jongwhan;Kim, Jonghoon;Kim, Incheol
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.12
    • /
    • pp.774-779
    • /
    • 2015
  • In this paper, we present the design and implementation of a large-scale qualitative spatial reasoner using Apache Spark, an in-memory high speed cluster computing environment, which is effective for sequencing and iterating component reasoning jobs. The proposed reasoner can not only check the integrity of a large-scale spatial knowledge base representing topological and directional relationships between spatial objects, but also expand the given knowledge base by deriving new facts in highly efficient ways. In general, qualitative reasoning on topological and directional relationships between spatial objects includes a number of composition operations on every possible pair of disjunctive relations. The proposed reasoner enhances computational efficiency by determining the minimal set of disjunctive relations for spatial reasoning and then reducing the size of the composition table to include only that set. Additionally, in order to improve performance, the proposed reasoner is designed to minimize disk I/Os during distributed reasoning jobs, which are performed on a Hadoop cluster system. In experiments with both artificial and real spatial knowledge bases, the proposed Spark-based spatial reasoner showed higher performance than the existing MapReduce-based one.
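
One composition step over disjunctive relation sets, the operation the reasoner parallelizes, can be sketched as follows (the tiny composition table here is a stand-in for illustration, not the reasoner's full topological/directional table):

```python
# Sketch: compose disjunctive relation sets via a composition table.
# COMP[(r1, r2)] -> set of possible relations for r1 o r2.
COMP = {
    ("N", "N"): {"N"},
    ("N", "E"): {"NE"},
    ("E", "N"): {"NE"},
    ("E", "E"): {"E"},
}

def compose(rels_ab, rels_bc):
    """rels_ab, rels_bc: disjunctive relation sets between A-B and B-C."""
    out = set()
    for r1 in rels_ab:
        for r2 in rels_bc:
            out |= COMP.get((r1, r2), set())
    return out
```

Shrinking the composition table to a minimal set of disjunctive relations, as the paper does, directly shrinks the inner loop above.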

Light Contribution Based Importance Sampling for the Many-Light Problem (다광원 문제를 위한 광원 기여도 기반의 중요도 샘플링)

  • Kim, Hyo-Won;Ki, Hyun-Woo;Oh, Kyoung-Su
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06b
    • /
    • pp.240-245
    • /
    • 2008
  • In computer graphics, realistically rendering a scene containing many lights requires a large amount of lighting computation. Monte Carlo methods are widely used to evaluate illumination from many lights quickly. Building on the Monte Carlo approach, this paper proposes a new importance-sampling technique that samples many lights effectively. The proposed technique rests on two key observations: first, even when a scene contains many lights, often only a few of them strongly affect any particular region; second, pixels with low spatial coherence, or located near shadow boundaries, are dominated by different lights. Based on these observations, the proposed method estimates how much each light contributes to a given region and constructs a probability density function (PDF) proportional to that contribution. To do so, pixels are clustered in image space and representative samples are selected from the cluster structure. Light contributions are evaluated at the representative samples, per-cluster PDFs are derived from them, and the final rendering is performed. With the proposed sampling technique, we obtained images with less noise than traditional sampling at the same sample count. The proposed technique can effectively handle scenes with many lights, varied materials, and complex occlusion.
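
The contribution-proportional sampling step can be sketched as follows (per-cluster handling is omitted; the `contributions` array would come from the representative samples, and the names are illustrative):

```python
# Sketch: build a PDF proportional to per-light contributions, then
# pick a light by inverse-CDF sampling of a uniform random number.
def build_pdf(contributions):
    total = sum(contributions)
    return [c / total for c in contributions]

def sample_light(pdf, u):
    """u in [0, 1) -> index of the sampled light."""
    acc = 0.0
    for i, p in enumerate(pdf):
        acc += p
        if u < acc:
            return i
    return len(pdf) - 1
```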


Permitted Limit Setting Method for Data Transmission in Wireless Sensor Network (무선 센서 네트워크에서 데이터 전송 허용범위의 설정 방법)

  • Lee, Dae-hee;Cho, Kyoung-woo;Oh, Chang-heon
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.574-575
    • /
    • 2018
  • In a wireless sensor network, the generation of redundant data due to spatial-temporal correlation consumes unnecessary energy and thereby reduces the network lifetime. In this paper, a data-collection experiment with a particulate-matter sensor confirms this spatial-temporal data redundancy, and we propose a permitted-limit setting method for data transmission to solve the problem. In the proposed method, the permitted limit for data transmission is set using the integrated average value within the cluster. The permitted limit reduces the redundant data of the member nodes, and by resetting the limit at the cluster head, redundant data can be reduced even when the collected data vary.
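
The permitted-limit test at a member node can be sketched as follows (the band form around the cluster average is an assumption for illustration; the head would periodically recompute `cluster_avg` and `tolerance`):

```python
# Sketch: a member node transmits only when its reading leaves the
# permitted band around the cluster-wide average, suppressing
# redundant transmissions of correlated readings.
def should_transmit(reading, cluster_avg, tolerance):
    lower, upper = cluster_avg - tolerance, cluster_avg + tolerance
    return not (lower <= reading <= upper)
```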
