• Title/Summary/Keyword: information granularity

Migration Strategies for Temporal Data based on Time-Segmented Storage Structure

  • Yun, Hongwon
    • Proceedings of the IEEK Conference / 2000.07a / pp.329-332 / 2000
  • Research interest in temporal data has focused almost exclusively on data models; relatively little work has addressed temporal data management. In this paper, we propose two data migration strategies based on a time-segmented storage structure: migration by Time Granularity and migration by LST-GET. We describe the criterion for data migration and the moving process. We simulated the performance of the migration strategy by Time Granularity in order to compare it with the non-segmented method, and we compared and analyzed the two migration strategies for temporal data.

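The abstract above only names the two strategies, so a small illustration may help. Below is a hedged Python sketch of what migration by Time Granularity could look like, assuming records are filed into fixed-width time segments; all identifiers (GRANULE_DAYS, TimeSegmentedStore, and so on) are invented for this sketch and are not from the paper.

```python
from collections import defaultdict
from datetime import datetime

GRANULE_DAYS = 30  # illustrative time granule: one segment per 30 days

class TimeSegmentedStore:
    """Toy time-segmented store: 'current' segments plus a 'past' archive."""

    def __init__(self):
        self.current = defaultdict(list)  # segment key -> records
        self.past = defaultdict(list)

    @staticmethod
    def segment_key(when: datetime) -> int:
        # Records fall into fixed-width time segments.
        return when.toordinal() // GRANULE_DAYS

    def insert(self, record, valid_end: datetime):
        self.current[self.segment_key(valid_end)].append(record)

    def migrate(self, now: datetime):
        # Migration by time granularity: every segment strictly older than
        # the segment containing `now` moves wholesale to past storage.
        cutoff = self.segment_key(now)
        for key in [k for k in self.current if k < cutoff]:
            self.past[key].extend(self.current.pop(key))

store = TimeSegmentedStore()
store.insert({"id": 1}, valid_end=datetime(2000, 1, 10))
store.migrate(now=datetime(2000, 7, 1))  # the January segment moves to `past`
```

Moving whole segments rather than individual tuples is what keeps the migration criterion cheap: one key comparison decides the fate of every record in a segment.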

Services Identification based on Use Case Recomposition (유스케이스 재구성을 통한 서비스 식별)

  • Kim, Yu-Kyong
    • The Journal of Society for e-Business Studies / v.12 no.4 / pp.145-163 / 2007
  • Service-Oriented Architecture is a style of information systems that enables the creation of applications built by combining loosely coupled and interoperable services. A service is an implementation of business functionality with proper granularity, invoked through a well-defined interface. In service modeling, when the granularity of a service is finer, the reusability and flexibility of the service are lower. To address this problem of service granularity, it is critical to identify and define coarse-grained services from the domain analysis model. In this paper, we define a process to identify services from the Use Case model elicited from domain analysis. A task tree is derived from Use Cases and their descriptions, and Use Cases are reconstructed by composition and decomposition of the task tree. The reconstructed Use Cases are defined and specified as services. Because our method is based on the widely used UML Use Case models, it can help minimize the time and cost of developing services on various platforms and in various domains.

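The identification step above is concrete enough for a sketch. Assuming, purely for illustration, that the task tree is available as a simple recursive structure (Task, leaf_count, and identify_services are invented names, not the paper's), coarse-grained service candidates can be found by accepting only subtrees that group enough elementary tasks:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list["Task"] = field(default_factory=list)

def leaf_count(task: Task) -> int:
    """Number of elementary tasks grouped under this subtree."""
    return 1 if not task.subtasks else sum(leaf_count(t) for t in task.subtasks)

def identify_services(task: Task, min_tasks: int = 3) -> list[str]:
    # A subtree becomes a service candidate once it groups at least
    # `min_tasks` elementary tasks; smaller subtrees are too fine-grained
    # and are folded (composed) into an enclosing subtree instead.
    if leaf_count(task) < min_tasks:
        return []
    inner = [s for child in task.subtasks for s in identify_services(child, min_tasks)]
    return inner if inner else [task.name]

tree = Task("Manage Orders", [
    Task("Place Order", [Task("Validate Cart"), Task("Reserve Stock"), Task("Pay")]),
    Task("Track Order"),
])
print(identify_services(tree))  # ['Place Order'] with the default threshold
```

Raising min_tasks yields coarser services; that threshold plays the role of the composition/decomposition dial the paper turns on its task tree.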

Performance Improvement of Parallel Processing System through Runtime Adaptation (실행시간 적응에 의한 병렬처리시스템의 성능개선)

  • Park, Dae-Yeon;Han, Jae-Seon
    • Journal of KIISE:Computer Systems and Theory / v.26 no.7 / pp.752-765 / 1999
  • On parallel machines, where performance parameters change dynamically in complex and unpredictable ways, it is difficult for compilers to predict the optimal values of the parameters at compile time; furthermore, these optimal values may change as the program executes. This paper addresses the problem by proposing adaptive execution, which makes program or control execution adapt in response to changes in machine conditions. Adaptive program execution makes it possible for programs to adapt themselves through the collaboration of the hardware and the compiler. For adaptive control execution, we applied the adaptive scheme to the granularity of sharing (adaptive granularity). Adaptive granularity is a communication scheme that effectively and transparently integrates bulk transfer into the shared-memory paradigm, varying the granularity according to the sharing behavior. Simulation results show that adaptive granularity improves performance by up to 43% over a hardware implementation of distributed shared memory.
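
The paper implements adaptive granularity in hardware, transparently to the user; the sketch below is only a behavioral illustration of the decision being automated, with every name (GranularityMonitor, FINE, BULK) invented here rather than taken from the paper.

```python
FINE, BULK = 64, 4096  # bytes: cache-line unit vs. bulk-transfer unit

class GranularityMonitor:
    """Chooses a sharing granularity per memory region from observed writers."""

    def __init__(self):
        self.writers = {}  # region id -> set of processors seen writing it

    def record_write(self, region, cpu):
        self.writers.setdefault(region, set()).add(cpu)

    def choose(self, region):
        # Several concurrent writers suggest fine-grained transfers (less
        # false sharing); a single producer lets bulk transfer amortize
        # communication cost over a large unit.
        return FINE if len(self.writers.get(region, set())) > 1 else BULK
```

The point of the scheme is that this choice varies per region and over time, tracking the sharing behavior rather than being fixed at compile time.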

Extended Forecasts of a Stock Index using Learning Techniques : A Study of Predictive Granularity and Input Diversity

  • Kim, Steven H.;Lee, Dong-Yun
    • Asia pacific journal of information systems / v.7 no.1 / pp.67-83 / 1997
  • The utility of learning techniques in investment analysis has been demonstrated in many areas, ranging from forecasting individual stocks to entire market indexes. To date, however, the application of artificial intelligence to financial forecasting has focused largely on short predictive horizons. Usually the forecast window is a single period ahead: if the input data involve daily observations, the forecast is for one day ahead; if monthly observations, then a month ahead; and so on. Thus far, little work has been conducted on the efficacy of long-term prediction involving multiperiod forecasting. This paper examines the impact of alternative procedures for extended prediction using knowledge discovery techniques. One dimension of the study involves temporal granularity: a single jump from the present period to the end of the forecast window versus a web of short-term forecasts involving a sequence of single-period predictions. Another parameter relates to the numerosity of input variables: a technical approach involving only lagged observations of the target variable versus a fundamental approach involving multiple variables. The dual possibilities along each of the granularity and numerosity dimensions entail a total of four models. These models are first evaluated using neural networks, then compared against a multi-input jump model using case-based reasoning. The computational models are examined in the context of forecasting the S&P 500 index.

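The jump-versus-web distinction above is easy to pin down in code. Below is a minimal sketch with an ordinary least-squares model standing in for the paper's neural networks and case-based reasoner; make_xy, jump_forecast, and web_forecast are invented names, and the paper's actual procedures differ.

```python
import numpy as np

def fit_linear(X, y):
    # Toy learner: least squares with an intercept term.
    X1 = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return lambda x: float(np.append(x, 1.0) @ coef)

def make_xy(series, lags, step):
    # Windows of `lags` past values paired with the value `step` periods ahead.
    idx = range(lags, len(series) - step + 1)
    X = np.array([series[i - lags:i] for i in idx])
    y = np.array([series[i + step - 1] for i in idx])
    return X, y

def jump_forecast(series, lags, horizon):
    # "Jump": one model maps the latest window straight to t + horizon.
    model = fit_linear(*make_xy(series, lags, horizon))
    return model(np.asarray(series[-lags:]))

def web_forecast(series, lags, horizon):
    # "Web": a single-period model applied repeatedly, feeding each
    # forecast back into the input window.
    model = fit_linear(*make_xy(series, lags, 1))
    window = list(series[-lags:])
    for _ in range(horizon):
        window.append(model(np.asarray(window[-lags:])))
    return window[-1]
```

The numerosity dimension corresponds to whether the input windows hold only lagged values of the target (technical) or are augmented with other variables (fundamental); crossing the two axes gives the paper's four models.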

Batch Resizing Policies and Techniques for Fine-Grain Grid Tasks: The Nuts and Bolts

  • Muthuvelu, Nithiapidary;Chai, Ian;Chikkannan, Eswaran;Buyya, Rajkumar
    • Journal of Information Processing Systems / v.7 no.2 / pp.299-320 / 2011
  • The overhead of processing fine-grain tasks on a grid induces the need for batch processing or task group deployment in order to minimise overall application turnaround time. When deciding the granularity of a batch, the processing requirements of each task should be considered as well as the utilisation constraints of the interconnecting network and the designated resources. However, the dynamic nature of a grid requires the batch size to be adaptable to the latest grid status. In this paper, we describe the policies and the specific techniques involved in the batch resizing process. We explain the nuts and bolts of these techniques in order to maximise the resulting benefits of batch processing. We conduct experiments to determine the nature of the policies and techniques in response to a real grid environment. The techniques are further investigated to highlight the important parameters for obtaining the appropriate task granularity for a grid resource.
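
As a back-of-the-envelope illustration of the idea (the function and its parameters below are invented for this sketch; the paper's policies weigh more factors, including the latest grid status), a batch can grow until its estimated turnaround exhausts the budget granted to one deployment:

```python
def pick_batch_size(task_cpu_time, task_file_size, bandwidth_bps, resource_speed,
                    turnaround_budget, dispatch_overhead=2.0):
    """Grow the batch while estimated transfer + compute + fixed dispatch
    overhead still fits within the turnaround budget (all times in seconds)."""
    n = 1
    while True:
        transfer = n * task_file_size / bandwidth_bps  # network cost
        compute = n * task_cpu_time / resource_speed   # processing cost
        if transfer + compute + dispatch_overhead > turnaround_budget:
            return max(1, n - 1)
        n += 1

# Example: 1 s tasks, 2 MB of input each, a 10 MB/s link,
# and a resource twice the baseline speed -> a batch of 40 tasks.
print(pick_batch_size(task_cpu_time=1.0, task_file_size=2e6,
                      bandwidth_bps=1e7, resource_speed=2.0,
                      turnaround_budget=30.0))
```

Because bandwidth and resource speed enter the estimate directly, the chosen batch size adapts as the grid status changes, which is the behavior the paper's resizing policies formalize.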

Concepts and Design Aspects of Granular Models of Type-1 and Type-2

  • Pedrycz, Witold
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.2 / pp.87-95 / 2015
  • In this study, we pursue a new direction for system modeling by introducing the concept of granular models, which produce results in the form of information granules (such as intervals, fuzzy sets, and rough sets). We present a rationale and several key motivating arguments behind the use of granular models and discuss their underlying design processes. The development of the granular model includes optimal allocation of information granularity through optimizing the criteria of coverage and specificity. The emergence and construction of granular models of type-2 and type-n (in general) is discussed. It is shown that achieving a suitable coverage-specificity tradeoff (compromise) is essential for developing granular models.
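
The coverage-specificity tradeoff is concrete enough to sketch. For an interval-valued (type-1) granular output formed by spreading a numeric prediction by ±eps, one common rendering of the two criteria looks as follows; the function names are invented, and Pedrycz's papers define several variants.

```python
import numpy as np

def coverage(y_true, lower, upper):
    # Fraction of observed outputs falling inside the granular outputs.
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

def specificity(lower, upper, y_range):
    # Narrow intervals are specific: 1.0 for points, 0.0 for the whole range.
    return float(np.mean(1.0 - (upper - lower) / y_range))

def granular_quality(y_true, y_pred, eps):
    """Score one allocation of information granularity `eps`: the product
    rewards granules that are simultaneously covering and specific."""
    lower, upper = y_pred - eps, y_pred + eps
    y_range = float(y_true.max() - y_true.min())
    return coverage(y_true, lower, upper) * specificity(lower, upper, y_range)
```

Optimal allocation of granularity then amounts to choosing eps (possibly per rule or per input region) to maximize this score: widening the intervals raises coverage but drains specificity, which is exactly the compromise the abstract describes.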

Content Modeling Based on Social Network Community Activity

  • Kim, Kyung-Rog;Moon, Nammee
    • Journal of Information Processing Systems / v.10 no.2 / pp.271-282 / 2014
  • The advancement of the knowledge society has enabled the social network community (SNC) to be perceived as another space for learning, where individuals produce, share, and apply content in self-directed ways. The content generated within social networks provides information of value for the participants in real time. Thus, this study proposes the social network community activity-based content model (SoACo Model), which takes SNC-based activities and embodies them within learning objects. The SoACo Model consists of content objects, aggregation levels, and information models. Content objects are composed of relationship-building elements, including real-time, changeable activities such as making friends, and participation-activity elements such as "Liking" specific content. Aggregation levels apply one of three granularity levels, chosen for the reusability of the elements: activity assets; real-time, changeable learning objects; and content. The SoACo Model is meaningful because it transforms SNC-based activities into learning objects for learning and teaching activities, and it applies to learning management systems since they organize activities (such as tweets from Twitter) depending on the teacher's intention.
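
The three aggregation levels nest naturally. The sketch below shows only that containment; the class and field names are invented and are not the SoACo Model's actual information model.

```python
from dataclasses import dataclass, field

@dataclass
class ActivityAsset:
    """Finest granule: one SNC activity, e.g. making a friend or a 'Like'."""
    kind: str     # 'friend', 'like', 'tweet', ...
    actor: str
    payload: str

@dataclass
class LearningObject:
    """Middle granule: a real-time, changeable grouping of activity assets."""
    topic: str
    assets: list[ActivityAsset] = field(default_factory=list)

@dataclass
class Content:
    """Coarsest granule: the unit a learning management system consumes."""
    title: str
    objects: list[LearningObject] = field(default_factory=list)
```

Choosing which granule to reuse (an asset, a learning object, or whole content) is what the model's reusability-driven aggregation levels are for.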

A Framework for Concurrency Control and Writing Authority Control in Collaborative Writing Systems (공동저작 시스템에서의 동시성 제어와 쓰기 권한 제어)

  • Yoo, Jae-Hong;Sung, Mee-Young
    • The Transactions of the Korea Information Processing Society / v.7 no.2 / pp.347-354 / 2000
  • This paper presents efficient mechanisms for concurrency control and writing-authority control in collaborative writing systems, where documents are represented by tree structures consisting of logical objects and content objects connected to the terminal objects of the trees. For concurrency control, we adopted an approach that extends the multiple-granularity locking scheme, which allows us to lock objects at each level of the hierarchy. We also defined a locking compatibility table by analyzing the operations applicable to objects at each level of the hierarchy. Finally, we suggest the extended multiple-granularity locking mechanism, which uses the locking compatibility table to decide whether an object can be locked. This scheme maximizes the possibility of concurrent access to shared objects. In addition, we suggest a mechanism for writing-authority control that, based on the concept of Group/Non-Group, prohibits Non-Group users from modifying shared objects. The proposed mechanism allows copyright to be protected in a reasonable way.

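The abstract does not reproduce the paper's extended compatibility table, but the classical multiple-granularity scheme it builds on is standard. The sketch below is a textbook rendering of that baseline (IS/IX/S/SIX/X modes and root-first intention locking), not the authors' extension:

```python
# Classical multiple-granularity lock compatibility (1 = compatible).
COMPAT = {
    "IS":  {"IS": 1, "IX": 1, "S": 1, "SIX": 1, "X": 0},
    "IX":  {"IS": 1, "IX": 1, "S": 0, "SIX": 0, "X": 0},
    "S":   {"IS": 1, "IX": 0, "S": 1, "SIX": 0, "X": 0},
    "SIX": {"IS": 1, "IX": 0, "S": 0, "SIX": 0, "X": 0},
    "X":   {"IS": 0, "IX": 0, "S": 0, "SIX": 0, "X": 0},
}

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.locks = name, parent, []

def can_lock(node, mode):
    return all(COMPAT[held][mode] for held in node.locks)

def lock(node, mode):
    """Lock `node` after taking intention locks on all ancestors, root first."""
    ancestors = []
    n = node.parent
    while n:
        ancestors.append(n)
        n = n.parent
    intent = "IS" if mode in ("IS", "S") else "IX"
    for anc in reversed(ancestors):
        if not can_lock(anc, intent):
            return False
        anc.locks.append(intent)
    if not can_lock(node, mode):
        return False  # a real implementation would also roll back the intents
    node.locks.append(mode)
    return True

doc = Node("document")
sec = Node("section-1", parent=doc)
par = Node("paragraph-3", parent=sec)
print(lock(par, "X"))  # True: IX on document and section-1, X on the paragraph
print(lock(sec, "S"))  # False: the IX held on section-1 blocks a shared lock
```

Locking at exactly the level a user is editing (a paragraph rather than the whole document) is what maximizes concurrent access in a collaborative editor.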

A Study on the Contour-Preserving Image Filtering for Noise Removal (잡음 제거를 위한 윤곽선 보존 기법에 관한 연구)

  • Yoo, Choong-Woong;Ryu, Dae-Hyun;Bae, Kang-Yeul
    • Journal of the Korean Institute of Telematics and Electronics T / v.36T no.4 / pp.24-29 / 1999
  • In this paper, a simple contour-preserving filtering algorithm is proposed. The goal of contour-preserving filtering is to remove noise and granularity as a preprocessing step for the image segmentation procedure. Our method finds an edge map and uses it to separate the image into an edge region and a non-edge region. For the non-edge region, typical smoothing filters can be used to remove the noise and the small areas during the segmentation procedure. Simulation results show that our method is slightly better than typical methods such as median filtering and gradient inverse weighted filtering from the viewpoint of analysis of variance (ANOVA).

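The two-region idea above reduces to a few lines. The sketch below assumes NumPy and SciPy, uses a Sobel gradient threshold as a stand-in for whichever edge detector the paper used, and smooths only the non-edge region with a median filter; the threshold and window size are invented values.

```python
import numpy as np
from scipy import ndimage

def contour_preserving_filter(img, edge_thresh=30.0, size=3):
    """Smooth only where there is no contour: edge pixels pass through."""
    g = img.astype(float)
    grad = np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1))
    edge_map = grad > edge_thresh                      # True on contours
    smoothed = ndimage.median_filter(img, size=size)   # typical smoothing filter
    return np.where(edge_map, img, smoothed)           # contours stay untouched
```

Because the filter never touches the edge region, contours survive the preprocessing that removes noise and granularity elsewhere.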

A Study on Construction of Granular Concept Hierarchies based Granularity Level (입자화 정도를 기반으로 하는 개념계층구조의 구축)

  • Kang, Yu-Kyung;Hwang, Suk-Hyung
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.1542-1545 / 2011
  • Formal Concept Analysis (FCA) is a kind of granular computing that clusters the objects sharing common attributes in given data, extracts concepts as minimal units of information, and organizes them into a hierarchy based on the relations among them, thereby visualizing the conceptual structure latent in the data. Because FCA extracts concepts under the precondition that objects share common attributes, it is limited in its ability to extract new concepts suited to diverse situations or conditions. As one way to resolve this problem, this paper proposes a formal concept analysis technique based on granularity levels. By introducing granularity levels into FCA, the proposed technique can extract concepts and construct concept hierarchies on the basis of various conditions and abstraction levels.
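
As a toy rendering of the proposal (the context, the names, and the use of a minimum-extent threshold as the granularity level are all illustrative assumptions, not the paper's exact formulation), formal concepts can be enumerated from a context and pruned by granularity:

```python
from itertools import combinations

# Toy formal context: object -> set of attributes.
CONTEXT = {
    "duck":  {"flies", "swims"},
    "swan":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
}

def intent(objs):
    sets = [CONTEXT[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def extent(attrs):
    return {o for o, a in CONTEXT.items() if attrs <= a}

def concepts(min_extent=1):
    """Enumerate formal concepts (extent, intent); `min_extent` acts as a
    granularity level that prunes overly specific concepts."""
    found = set()
    objs = list(CONTEXT)
    for r in range(1, len(objs) + 1):
        for combo in combinations(objs, r):
            b = intent(set(combo))    # shared attributes of the cluster
            a = frozenset(extent(b))  # closure: all objects carrying them
            if len(a) >= min_extent:
                found.add((a, frozenset(b)))
    return found

for ext, itt in sorted(concepts(min_extent=2), key=lambda c: -len(c[0])):
    print(sorted(ext), sorted(itt))
```

Raising min_extent coarsens the lattice: concepts whose extents are too small for the chosen abstraction level simply never enter the hierarchy.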