• Title/Summary/Keyword: Heterogeneous storage

The Method of Data Synchronization Among Devices for Personal Cloud Services (퍼스널 클라우드 서비스를 위한 임의의 단말간 컨텐츠 동기화 방법)

  • Choi, Eunjeong;Lee, Jeunwoo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.6 no.6
    • /
    • pp.377-382
    • /
    • 2011
  • This paper describes a method of data synchronization among devices for personal cloud services. Existing data synchronization for mobile devices is based on a central server synchronizing with mobile devices, or on a PC synchronizing with a mobile device. The purpose of this paper, however, is to share user data in heterogeneous environments without depending on a central server. The technology can be applied to synchronize personal data between a device and personal cloud storage for personal cloud services. The ad hoc synchronization requires a sync agent service discovery module, a user authentication module, a network adapter, and an application data synchronization module. The method described in this paper is better than existing client-server based synchronization technology with respect to the availability, performance, and scalability quality attributes.
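The abstract above names four components for serverless, ad hoc synchronization: service discovery, user authentication, a network adapter, and application data synchronization. The paper does not publish an API, so the sketch below is only an illustrative composition of those modules; every class and method name here is hypothetical.

```python
# Hypothetical composition of the four modules named in the abstract.
# The flow: discover a peer, authenticate, then push changed records
# directly over the network adapter, with no central server involved.

class SyncAgentDiscovery:
    def find_peers(self):
        # e.g. a multicast/mDNS lookup on the local network (assumed)
        return ["peer-device-01"]

class UserAuthenticator:
    def authenticate(self, peer):
        # e.g. check that both devices belong to the same user account (assumed)
        return True

class NetworkAdapter:
    def send(self, peer, records):
        print(f"sending {len(records)} changed records to {peer}")

class AppDataSynchronizer:
    def changed_records(self):
        # records modified since the last sync (assumed change log)
        return [{"id": 1, "field": "value"}]

def ad_hoc_sync():
    discovery, auth = SyncAgentDiscovery(), UserAuthenticator()
    net, app = NetworkAdapter(), AppDataSynchronizer()
    for peer in discovery.find_peers():
        if auth.authenticate(peer):
            net.send(peer, app.changed_records())

if __name__ == "__main__":
    ad_hoc_sync()
```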

Trust Evaluation Metrics for Selecting the Optimal Service on SOA-based Internet of Things

  • Kim, Yukyong
    • Journal of Software Assessment and Valuation
    • /
    • v.15 no.2
    • /
    • pp.129-140
    • /
    • 2019
  • In the IoT environment, there are a huge number of heterogeneous devices with limited capacity. Existing trust evaluation methods are not adequate for this environment because of the limited storage space and computational resources. In addition, since IoT devices are mainly human-operated, trust evaluation should reflect the social relations among device owners. There is also a need for a mechanism that reflects the tendency of the trustor and environmental factors. In this paper, we propose an adaptable trust evaluation method for SOA-based IoT systems to deal with these issues. The proposed model is designed to minimize confidence bias and to respond dynamically to environmental changes by combining direct evaluation and indirect evaluation. It is expected that trust can be secured through quantitative evaluation by providing feedback based on social relationships.
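The abstract describes combining direct and indirect evaluation while accounting for the trustor's tendency and environmental factors, but it does not give the metrics themselves. The snippet below is a minimal sketch of one common way to combine such scores, assuming a simple convex combination; the weight `alpha` standing in for the trustor's tendency and the `env_factor` adjustment are assumptions, not the paper's formulas.

```python
# A minimal sketch of combining direct and indirect trust scores.
# alpha (trustor's tendency) and env_factor (environmental adjustment)
# are illustrative assumptions only.

def combined_trust(direct, indirect_scores, alpha=0.6, env_factor=1.0):
    """direct: trustor's own observation in [0, 1];
    indirect_scores: recommendations from socially related owners."""
    indirect = sum(indirect_scores) / len(indirect_scores) if indirect_scores else 0.0
    trust = alpha * direct + (1 - alpha) * indirect
    return max(0.0, min(1.0, trust * env_factor))

print(combined_trust(0.8, [0.7, 0.9, 0.6], alpha=0.7, env_factor=0.95))
```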

An Efficient Transaction Management on HVEM DataGrid (HVEM 데이터그리드 상의 효율적인 트랜잭션 관리)

  • Jung, Im Y.;Kim, Eunsung;Choi, Hyung Jun;Yeom, Heon Y.
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2007.11a
    • /
    • pp.637-639
    • /
    • 2007
  • The HVEM DataGrid [1] is a space in which researchers efficiently manage and share the results of experiments performed with a High Voltage Electron Microscope (HVEM). As a system used by many people at the same time, and one that may include heterogeneous storage, the HVEM DataGrid must store HVEM experimental results synchronously together with their metadata. Transaction management that takes these characteristics of the HVEM DataGrid into account must satisfy the ACID properties of transactions while also handling large e-Science artifacts efficiently. This paper therefore proposes an efficient management scheme for transactions that span the heterogeneous storage of the HVEM DataGrid.
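The abstract requires that an experiment result and its metadata be stored together across heterogeneous stores without violating ACID. The paper's actual protocol is not detailed in the abstract; the sketch below only illustrates the requirement with a generic two-phase-commit style coordinator, with hypothetical store names and illustrative payloads.

```python
# A generic two-phase-commit style sketch: stage the write on every store,
# commit only if all stores report readiness, otherwise roll back.
# This is not the paper's protocol; it only illustrates storing a large
# result file and its metadata atomically across heterogeneous storage.

class StoreParticipant:
    def __init__(self, name):
        self.name, self.staged = name, None
    def prepare(self, payload):
        self.staged = payload            # stage the write, report readiness
        return True
    def commit(self):
        print(f"{self.name}: committed {self.staged!r}")
    def rollback(self):
        self.staged = None
        print(f"{self.name}: rolled back")

def atomic_store(image_store, metadata_store, image, metadata):
    participants = [(image_store, image), (metadata_store, metadata)]
    if all(p.prepare(data) for p, data in participants):
        for p, _ in participants:
            p.commit()
    else:
        for p, _ in participants:
            p.rollback()

# Illustrative placeholder data, not taken from the paper.
atomic_store(StoreParticipant("file-store"), StoreParticipant("metadata-db"),
             "experiment_result.img", {"instrument": "HVEM", "sample": "specimen-A"})
```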

Determination of Aquifer Characteristics from Specific Capacity Data of Wells in Cheju Island (제주도 지하수의 우물 비양수량자료를 이용한 대수층상수 결정방법)

  • 최병수
    • Journal of the Korean Society of Groundwater Environment
    • /
    • v.6 no.4
    • /
    • pp.180-187
    • /
    • 1999
  • Transmissivity is often estimated from specific capacity data, because standard aquifer tests for obtaining transmissivity are expensive while specific capacity data are relatively easy to obtain. Most often, the analytic expressions relating specific capacity to transmissivity derived by Theis (1963), Brown (1963), and Logan (1964) are used in this analysis. The analytic solutions typically used to predict transmissivity from specific capacity in alluvial aquifers assume an influence radius and/or a storage coefficient for the aquifer, but they do not agree well with measured transmissivity in fractured rock aquifers and in heterogeneous aquifers. Razack and Huntley (1991), Huntley and Steffey (1992), and Mace (1997) proposed empirical relations between specific capacity and transmissivity in heterogeneous alluvial aquifers, fractured rock aquifers, and karst aquifers. This study focuses on the comparison between transmissivity and specific capacity data in the volcanic rock aquifers of Jeju Island. The empirical relation between the log of transmissivity and the log of specific capacity suggests they are linearly related (correlation coefficient 0.951), and a band of $\pm$0.25 log cycles in transmissivity includes 96.6% of the data.
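The abstract reports a linear relation between the log of transmissivity and the log of specific capacity (correlation coefficient 0.951) but does not quote the fitted coefficients. The sketch below only shows how such a log-log relation is fit and then used to estimate transmissivity; the sample values, and therefore the resulting coefficients, are placeholders rather than the paper's published fit.

```python
# Fitting log10(T) = a * log10(Sc) + b between transmissivity T and
# specific capacity Sc, in the spirit of the empirical relation the
# abstract describes. Data values below are illustrative only.
import numpy as np

sc = np.array([10.0, 50.0, 120.0, 300.0, 800.0])   # specific capacity (illustrative)
t  = np.array([15.0, 70.0, 180.0, 400.0, 1100.0])  # measured transmissivity (illustrative)

a, b = np.polyfit(np.log10(sc), np.log10(t), 1)     # least-squares fit in log space
r = np.corrcoef(np.log10(sc), np.log10(t))[0, 1]    # correlation coefficient

def estimate_transmissivity(specific_capacity):
    return 10 ** (a * np.log10(specific_capacity) + b)

print(f"log T = {a:.3f} log Sc + {b:.3f}  (r = {r:.3f})")
print(estimate_transmissivity(200.0))
```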

An Efficient Method for Determining Work Process Number of Each Node on Computation Grid (계산 그리드 상에서 각 노드의 작업 프로세스 수를 결정하기 위한 효율적인 방법)

  • Kim Young-Hak;Cho Soo-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.1
    • /
    • pp.189-199
    • /
    • 2005
  • Grid computing is a technique for solving large problems, such as those in science and engineering, by sharing the computing power and large storage space of numerous computers on a distributed network. The grid computing environment is composed of WANs with differing performance and heterogeneous network conditions, so it is important to reflect these heterogeneous performance factors in the computation. In this paper, we propose an efficient method that determines the number of work processes for each node by considering network state information. The network state information covers latency, bandwidth, and combined latency-bandwidth information. First, using the measured information, we compute a performance ratio and determine the number of work processes for each node. Finally, an RSL file is created automatically based on the determined number of work processes, and the work is then executed. The network performance information is collected by the NWS. According to the experimental results, the method that considers network performance information improves on the existing methods by 23%, 31%, and 57% in terms of work amount, number of work processes, and number of nodes, respectively.
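The abstract describes scoring each node from measured network information, converting the scores into a performance ratio, and assigning work processes accordingly. The exact scoring formula is not given, so the sketch below uses an assumed latency/bandwidth weighting and hypothetical node measurements purely to show the allocation step.

```python
# A minimal sketch of turning NWS-style latency/bandwidth measurements into
# a per-node work-process count: score each node, normalize the scores into
# ratios, and split a fixed number of processes proportionally.
# The scoring formula, weights, and node data are assumptions.

def node_score(latency_ms, bandwidth_mbps, w_lat=0.5, w_bw=0.5):
    # lower latency and higher bandwidth both raise the score
    return w_lat * (1.0 / latency_ms) + w_bw * bandwidth_mbps

def assign_processes(nodes, total_processes):
    scores = {name: node_score(lat, bw) for name, (lat, bw) in nodes.items()}
    total = sum(scores.values())
    # rounding may make the sum differ slightly from total_processes
    return {name: max(1, round(total_processes * s / total))
            for name, s in scores.items()}

nodes = {"node-a": (5.0, 100.0), "node-b": (20.0, 40.0), "node-c": (8.0, 80.0)}
print(assign_processes(nodes, total_processes=16))
```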

A Data Manager for a Vehicle Control System (이동체 통제 시스템을 위한 데이타 관리자)

  • Han, Jae-Joon;Han, Ki-Joon
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.4 no.1 s.6
    • /
    • pp.103-114
    • /
    • 1996
  • A Vehicle Control System, which combines a Car Navigation System, a Geographical Information System, and a Communication Network, usually permits a central control room to detect vehicle locations dynamically and to manage this information synthetically. A vehicle control system can therefore be used to control the rapid movement of patrol cars, freight transportation, and so on. In this paper, we designed and implemented a data manager for an application system, such as a vehicle control system, that typically manages road data, vehicle data, vehicle location data, and similar information. The data manager implemented in this paper consists of a system management module, a road data management module, a vehicle data management module, a GPS (Global Positioning System) data management module, and an additional data management module. In particular, we use the SHORE (Scalable Heterogeneous Object REpository) system as the storage system of the data manager.
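The abstract lists the kinds of data the manager handles (road, vehicle, and GPS location data) and its module structure. The small sketch below only illustrates the sort of records a vehicle/GPS data module might manage; the field names are assumptions, and the SHORE object repository used in the paper is simply stubbed out with an in-memory list.

```python
# Illustrative record types for the vehicle and GPS data management modules.
# Field names are assumptions; SHORE-backed storage is replaced by a list.
from dataclasses import dataclass
import datetime

@dataclass
class GpsFix:
    vehicle_id: str
    latitude: float
    longitude: float
    timestamp: datetime.datetime

class VehicleDataManager:
    """Stands in for the vehicle/GPS data management modules."""
    def __init__(self):
        self.fixes = []                       # SHORE repository in the real system
    def record_fix(self, fix: GpsFix):
        self.fixes.append(fix)
    def latest_position(self, vehicle_id):
        fixes = [f for f in self.fixes if f.vehicle_id == vehicle_id]
        return max(fixes, key=lambda f: f.timestamp, default=None)

dm = VehicleDataManager()
dm.record_fix(GpsFix("patrol-car-7", 37.55, 126.99, datetime.datetime.now()))
print(dm.latest_position("patrol-car-7"))
```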

Encapsulation of SEED Algorithm in HCCL for Selective Encryption of Android Sensor Data (안드로이드 센서 정보의 선택적 암호화를 지원하는 HCCL 기반 SEED 암호의 캡슐화 기능 연구)

  • Kim, Hyung Jong;Ahn, Jae Yoon
    • Journal of the Korea Society for Simulation
    • /
    • v.29 no.2
    • /
    • pp.73-81
    • /
    • 2020
  • HCCL stands for Heterogeneous Container Class Library. HCCL is a library that allows heterogeneous types of data to be stored in a container as a single record and to be organized as a list of records to be stored in a database. With HCCL, encryption and decryption can be done based on the unified data type. Recently, the IoT sensors embedded in smartphones have enabled developers to provide various convenient services to users. However, it is also true that infringement of personal information may occur in the process of transmitting sensor information to an API, and users need to be prepared for this situation. In this study, we developed a data model that enhances existing security by using the SEED cryptographic algorithm while managing sensor information based on HCCL. Because the Android environment does not provide a permission management function for sensors, this study decides whether or not to encrypt sensor information based on the user's choice, so that the user can control the creation and storage of safe data. For verification of this work, we present a performance evaluation comparing against storing the sensor data in plaintext.
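The core idea above is selective encryption: each sensor record is stored either encrypted or in plaintext depending on the user's choice. The sketch below illustrates only that selection step. It uses AES-CBC from the `cryptography` package as a stand-in because SEED support varies across Python libraries, and the record format and flag name are assumptions; it is not the paper's HCCL implementation.

```python
# Selective encryption of a sensor record based on a user-controlled flag.
# AES-CBC is used here as a stand-in for SEED; record layout is illustrative.
import json, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = os.urandom(16)                       # demo key; a real app would manage keys securely

def encrypt_record(record: dict) -> bytes:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    data = padder.update(json.dumps(record).encode()) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(data) + enc.finalize()

def store_sensor_record(record: dict, encrypt: bool) -> bytes:
    """encrypt=True mirrors the user's per-sensor choice described in the abstract."""
    return encrypt_record(record) if encrypt else json.dumps(record).encode()

sample = {"sensor": "accelerometer", "x": 0.01, "y": 9.81, "z": 0.02}
print(store_sensor_record(sample, encrypt=True)[:16].hex())   # ciphertext prefix
print(store_sensor_record(sample, encrypt=False))             # plaintext record
```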

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in the area of data mining is the size of the data, as most data sets have huge volume these days. Streams of data are normally accumulated into data storage or databases. Transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried, unused, inside huge data storage because of their size. Other data sets are quickly lost as soon as they are created because, for many reasons, they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in the study of data mining. Stream data is a data set that is accumulated into data storage from a data source continuously, and its size in many cases becomes increasingly large over time. To mine information from this massive data takes too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store all the stream data sets accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from each data set in the stream, and this rule set is accumulated into a master rule-set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space than the traditional method, which saves the whole data set. Another advantage is that the accumulated rule set is used as a prediction model: prompt response to user requests is possible at any time, because the rule set is always ready to be used for making decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, the combination of many different models can produce a prediction model with better performance, and the consolidated rule set covers the entire data set while the traditional sampling approach covers only part of it. This study uses stock market data, which is a heterogeneous data set whose characteristics vary over time. The indexes in stock market data fluctuate whenever an event influences the stock market, so the variance of the values in each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous data set, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the method suggested here. The first approach induces a rule set from the recent data set to predict the new data set; the second induces a rule set, every time a new data set must be predicted, from all the data accumulated since the beginning. We found that neither of these two performs as well as the accumulated rule-set method. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model using only the more important rule sets, and the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. The method has the limitation that its application here is bound to stock market data; a more dynamic real-time stream data set is desirable for applying it further. Another open problem is that, as the number of rules increases over time, special rules such as redundant rules or conflicting rules must be managed efficiently.
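The abstract walks through a concrete loop: mine a rule set from each incoming window, merge it into a master rule set, use the accumulated rules as the prediction model, and weight rules by performance. The sketch below shows only that control flow; the rule representation, the trivial "mining" step, and the reweighting factors are assumptions, not the paper's algorithm.

```python
# A bare-bones sketch of the accumulated rule-set idea: mine rules from each
# stream window, merge them into a master rule set, predict by weighted vote,
# and reward rules that matched the actual outcome.

def mine_rules(window):
    # toy "mining": one rule per window, keyed by the sign of the average change
    avg = sum(window) / len(window)
    return [{"predicts": "up" if avg > 0 else "down", "weight": 1.0}]

class AccumulatedRuleSet:
    def __init__(self):
        self.rules = []                      # master rule-set storage
    def absorb(self, new_rules):
        self.rules.extend(new_rules)
    def predict(self):
        score = sum(r["weight"] if r["predicts"] == "up" else -r["weight"]
                    for r in self.rules)
        return "up" if score >= 0 else "down"
    def reweight(self, actual):
        for r in self.rules:                 # reward rules matching the outcome
            r["weight"] *= 1.1 if r["predicts"] == actual else 0.9

model = AccumulatedRuleSet()
for window, actual in [([0.4, -0.1, 0.3], "up"), ([-0.5, -0.2, 0.1], "down")]:
    model.absorb(mine_rules(window))
    print(model.predict())
    model.reweight(actual)
```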

Efficient Publishing Spatial Information as GML for Interoperability of Heterogeneous Spatial Database Systems (이질적인 공간정보시스템의 상호 운용성을 위한 효과적인 지리데이터의 GML 사상)

  • 정원일;배해영
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.1
    • /
    • pp.12-26
    • /
    • 2004
  • In the past, geographic data was constructed and serviced in formats specific to each GIS (Geographic Information System). Recently, providing interoperability among GISs has become important for efficiently using the various geographic data held by conventional GISs. Accordingly, the OGC (Open GIS Consortium) proposed GML (Geography Markup Language) to offer interoperability between heterogeneous GISs in distributed environments. GML is an XML encoding for the transport and storage of geographic information, including both the spatial and non-spatial properties of geographic features, and it is accompanied by the Web Map Server Implementation Specification for serving GML documents. Prototypes that provide reciprocal interchange of geographic information between conventional GISs and GML documents have therefore been widely studied. In this paper, we propose a method for mapping geographic information between a spatial database and GML, for a prototype that supports interoperability between heterogeneous geographic information systems. For this method, we first explain the scheme for converting geographic information in a conventional spatial database into GML documents according to the GML specification, and second we show the scheme for transforming geographic information in GML documents into geographic data in a spatial database. Consequently, the proposed method is applicable to a framework for Web-based integrated geographic information services, because the mapping of geographic information between the spatial database and GML offers interoperability with geographic information already built in conventional GISs.
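The abstract's first conversion direction is publishing a spatial-database feature (spatial geometry plus non-spatial properties) as a GML document. The sketch below is a much-simplified illustration of that direction only: the feature record and the exact element layout are assumptions, whereas a real mapping would follow the full GML application schema described in the paper.

```python
# Publishing a point feature from a spatial database as GML-style XML.
# The record fields and element layout are simplified assumptions.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

def point_feature_to_gml(name: str, x: float, y: float) -> str:
    ET.register_namespace("gml", GML_NS)
    feature = ET.Element("Feature")
    ET.SubElement(feature, "name").text = name            # non-spatial property
    point = ET.SubElement(feature, f"{{{GML_NS}}}Point")   # spatial property
    ET.SubElement(point, f"{{{GML_NS}}}coordinates").text = f"{x},{y}"
    return ET.tostring(feature, encoding="unicode")

print(point_feature_to_gml("city-hall", 126.9784, 37.5665))
```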

A Change Management Model for BPM Documents in e-Business Environments (e-Business를 위한 BPM 문서 변경관리 모델)

  • 배혜림;조재균;정석찬;박기남
    • The Journal of Society for e-Business Studies
    • /
    • v.8 no.3
    • /
    • pp.87-105
    • /
    • 2003
  • Business Process Management (BPM) is an emerging trend for managing business process life cycles and integrating heterogeneous systems. Recently, BPM has come to be considered an essential element for automating complex business processes involving many companies, particularly in an e-Business environment. In such an environment, it is very important for each business partner to be able to trace the history of resources. However, supporting the management of resource changes has been a difficult problem for Workflow Management Systems (WFMSs) because of limited storage, complex process structures, and the absence of a formal change model. In this paper, a new framework is proposed that can support change management for documents, one of the core resources of a business process. Under a traditional WFMS framework, all workflow components belong to either the build-time or the run-time function, and executions of workflow processes are performed by these two functions. To manage document changes while a process is executing, a new framework with additional component modules is required. A version control method is introduced for the purpose of managing document changes. The proposed method includes five models, covering document structure, process structure, the association between process and document, version management, and efficient version storage. A prototype system is developed to demonstrate the effectiveness of the proposed models.
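The abstract argues for versioning documents as they change during process execution so that business partners can trace their history. The toy sketch below illustrates only that idea: each change creates a new immutable version tagged with the process activity that produced it. The structure and names are illustrative assumptions, not the paper's five models.

```python
# A toy per-document version chain: every update during process execution
# appends a new version recording which activity changed the document.
from dataclasses import dataclass, field

@dataclass
class DocumentVersion:
    number: int
    content: str
    activity: str                     # process activity that produced the change

@dataclass
class VersionedDocument:
    doc_id: str
    versions: list = field(default_factory=list)
    def update(self, content, activity):
        self.versions.append(DocumentVersion(len(self.versions) + 1, content, activity))
    def history(self):
        return [(v.number, v.activity) for v in self.versions]

doc = VersionedDocument("purchase-order-17")      # hypothetical document id
doc.update("quantity=100", "order-created")
doc.update("quantity=120", "order-revised")
print(doc.history())
```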
