• Title/Abstract/Keyword: distributed storage systems

207 search results

미래 홈 멀티미디어 가전을 위한 디지털 컨버젼스 플랫폼 구현 (Implementation of a Digital Convergence Platform for Future Home Multimedia Appliances)

  • 오화용;김동환;이은서;장태규
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2005년도 추계종합학술대회
    • /
    • pp.983-986
    • /
    • 2005
  • This paper describes a digital convergence platform (DCP) implemented on the basis of the MPEG-21 multimedia framework. The DCP is a newly proposed solution in this research for convergence services in the future home multimedia environment. It is a common platform designed to be configurable through software, which is needed for convergence services across diverse digital media. A distributed peer-to-peer service and transaction model is another new feature realized in the DCP using the MPEG-21 multimedia framework. A prototype DCP was implemented to verify its multimedia service and transaction functions. The developed DCPs are networked with IP clustering storage systems for distributed multimedia services. Successful streaming of MPEG-2/4 video and audio was verified with the implemented DCP test-bed system.

분산전원의 구성 및 출력 제어 방법에 따른 Droop 계수 설정 방법 (A Method to Determine the Droop Constant of DGs Considering the Configuration and Active Power Control Mode)

  • 안선주;박진우;정일엽;문승일
    • 전기학회논문지
    • /
    • Vol. 57, No. 11
    • /
    • pp.1954-1961
    • /
    • 2008
  • A microgrid usually consists of a cluster of distributed generators (DGs), energy storage systems, and loads, and can operate in either grid-connected or islanded mode. This paper presents detailed descriptions of two different options for controlling the active power of DGs in the microgrid. One is to regulate the power injected by the unit to a desired amount (unit output power control), and the other is to regulate the active power flow in the feeder where the unit is installed to a constant value (feeder flow control). Frequency-droop characteristics are used to achieve good active power sharing when the microgrid operates in islanded mode. The changes in frequency and in the active power output of the DGs are investigated according to the control mode and configuration of the DGs when the microgrid is disconnected from the main grid. Based on this analysis, the paper proposes a method to determine the droop constant of DGs operating in feeder flow control mode. Simulation results using PSCAD/EMTDC are presented to validate the approach, which performs better than the conventional one.
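
As background, the conventional frequency-droop relation that such schemes build on can be written as below; this is the textbook form for a unit under output power control, and the droop setting the paper derives for feeder-flow-controlled units may differ.

```latex
% Conventional frequency droop for a DG under unit output power control:
% the frequency reference falls linearly as the unit's active power output rises.
f \;=\; f_{0} \;-\; m\,\bigl(P - P_{0}\bigr)
% Proportional load sharing among islanded units follows from choosing the
% droop constants inversely proportional to the unit ratings:
m_{i}\,P_{i,\max} \;=\; \text{const.} \quad \text{for all units } i
```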

Microgrid energy scheduling with demand response

  • Azimian, Mahdi;Amir, Vahid;Haddadipour, Shapour
    • Advances in Energy Research
    • /
    • Vol. 7, No. 2
    • /
    • pp.85-100
    • /
    • 2020
  • Distributed energy resources (DERs) are essential for coping with growing multiple energy demands. A microgrid (MG) is a small-scale version of the power system that makes the integration of DERs possible and allows demand-side management to be used to its full potential. Hence, this study focuses on the analysis of optimal power dispatch, considering economic aspects, in a multi-carrier microgrid (MCMG) with price-responsive loads. The paper proposes a novel time-based demand-side management scheme that reshapes the load curve and prevents excessive energy use during peak hours. In conventional studies, energy consumption is optimized from the perspective of each infrastructure user without considering the interactions between infrastructures. Here, the interaction of energy system infrastructures is considered in the presence of energy storage systems (ESSs), small-scale energy resources (SSERs), and responsive loads. Simulations are performed using GAMS (General Algebraic Modeling System) to model the MCMG, which is connected to the electricity, natural gas, and district heat networks to supply multiple energy demands. Results show that the simultaneous operation of various energy carriers, together with the utilization of price-responsive loads, leads to better MCMG performance and lower operating costs for smart distribution grids. The model is examined on a typical MCMG, and its effectiveness is demonstrated.
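
The sketch below is a toy illustration of the time-based demand-side management idea, not the paper's GAMS formulation; the hourly load, prices, and shiftable share are invented. It moves a bounded fraction of price-responsive load out of the most expensive hours into the cheapest ones while keeping total daily energy unchanged.

```python
# Minimal illustration of time-based demand response: shift a bounded share of
# price-responsive load out of peak-price hours into the cheapest hours while
# keeping total daily energy unchanged. All figures are illustrative only.
def shift_load(load, price, shiftable_share=0.15):
    """Return a reshaped 24-hour load profile with the same total energy."""
    hours = sorted(range(len(load)), key=lambda h: price[h])
    third = len(hours) // 3
    cheap, peak = hours[:third], hours[-third:]
    new_load = list(load)
    moved = 0.0
    for h in peak:                       # curtail part of the load in peak-price hours
        delta = shiftable_share * load[h]
        new_load[h] -= delta
        moved += delta
    for h in cheap:                      # recover the same energy in the cheapest hours
        new_load[h] += moved / len(cheap)
    return new_load

load = [30, 28, 27, 27, 28, 32, 40, 55, 60, 58, 57, 55,
        54, 53, 55, 58, 65, 75, 80, 78, 70, 55, 45, 35]        # MWh per hour (invented)
price = [40, 38, 37, 37, 38, 45, 60, 90, 110, 105, 100, 95,
         90, 88, 92, 100, 120, 150, 160, 155, 130, 90, 70, 50]  # $/MWh (invented)
reshaped = shift_load(load, price)
print(f"energy before: {sum(load):.0f} MWh, after: {sum(reshaped):.0f} MWh")
```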

도커 기반의 실시간 데이터 연계 및 처리 환경을 고려한 빅데이터 관리 플랫폼 개발 (Development of Big-data Management Platform Considering Docker Based Real Time Data Connecting and Processing Environments)

  • 김동길;박용순;정태윤
    • 대한임베디드공학회논문지
    • /
    • Vol. 16, No. 4
    • /
    • pp.153-161
    • /
    • 2021
  • Real-time access is required to handle continuous and unstructured data, and management should remain flexible under dynamic conditions. A platform can be built to collect, store, and process data on a single local server or across multiple servers. Although the former, centralized method is easy to control, it creates an overload problem because all processing takes place in one unit; the latter, distributed method performs parallel processing, so it responds quickly and scales capacity easily, but its design is complex. This paper provides data collection and processing on one platform built in the latter, distributed manner to derive significant insights from the various data held by an enterprise or agency; results are available intuitively on dashboards, and Spark is utilized to improve distributed processing performance. All services are deployed and managed with Docker. The data used in this study was collected entirely through Kafka; for a 4.4-gigabyte file, processing in Spark cluster mode took 2 minutes 15 seconds, about 3 minutes 19 seconds faster than local mode.
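
As an illustration of the Kafka-to-Spark path such a platform relies on, a minimal PySpark Structured Streaming sketch is shown below. The broker address, topic, schema, and output paths are placeholders rather than the platform's actual configuration, and the spark-sql-kafka connector must be available on the cluster.

```python
# Minimal sketch of consuming a Kafka topic with Spark Structured Streaming and
# persisting the parsed records; all names and paths here are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

schema = StructType([StructField("sensor_id", StringType()),
                     StructField("value", DoubleType())])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")  # Docker service name (assumed)
       .option("subscribe", "ingest-topic")              # placeholder topic
       .load())

# Kafka delivers key/value as bytes; decode the value and parse the JSON payload.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("rec"))
          .select("rec.*"))

query = (parsed.writeStream
         .format("parquet")                              # write to shared/distributed storage
         .option("path", "/data/output")
         .option("checkpointLocation", "/data/checkpoints")
         .start())
query.awaitTermination()
```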

A Four-Layer Robust Storage in Cloud using Privacy Preserving Technique with Reliable Computational Intelligence in Fog-Edge

  • Nirmala, E.;Muthurajkumar, S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 9
    • /
    • pp.3870-3884
    • /
    • 2020
  • The proposed framework of Four Layer Robust Storage in Cloud (FLRSC) architecture involves a host server, local hosts, and edge devices, in addition to Virtual Machine Monitoring (VMM). The goal is to protect the privacy of data stored at edge devices. The computational intelligence (CI) part of the algorithm distributes blocks of data to three different layers; blocks are partially encoded and forwarded to the next layer for decoding using hash and Reed-Solomon algorithms. VMM uses a snapshot algorithm to detect intrusion. The proposed system is compared with the Tiang Wang method to validate the efficiency of secure data transfer, so security is evaluated against the indexed efficiency. The study integrates communication between local host software and nearby edge devices over different channels, verifying snapshots with a Lamport mechanism to ensure integrity and security at the software level and thereby reduce latency. It also provides a thorough understanding of data communication at the software level with VMM. The performance evaluation and feasibility study of security in FLRSC against a three-layered approach is demonstrated over 232 blocks of data with 98% accuracy. Practical implications and contributions to the growing knowledge base are highlighted, along with directions for further research.
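
A very rough, hedged sketch of the block-distribution idea follows; it is not the FLRSC implementation, the layer names and block size are assumed, and the partial encoding step (e.g. Reed-Solomon) is omitted.

```python
# Sketch: split data into blocks, tag each block with an integrity digest, and
# spread the blocks across three layers round-robin. Layer names, block size,
# and the overall scheme are illustrative assumptions, not the paper's design.
import hashlib

LAYERS = ["edge", "fog", "cloud"]
BLOCK_SIZE = 4096  # bytes, assumed

def distribute(data: bytes):
    """Return a per-layer list of (index, sha256_digest, block) tuples."""
    placement = {layer: [] for layer in LAYERS}
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for idx, block in enumerate(blocks):
        digest = hashlib.sha256(block).hexdigest()   # integrity check per block
        placement[LAYERS[idx % len(LAYERS)]].append((idx, digest, block))
    return placement

placement = distribute(b"x" * 10000)
print({layer: len(blocks) for layer, blocks in placement.items()})
```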

Spatio-Temporal Query Processing Over Sensor Networks: Challenges, State Of The Art And Future Directions

  • Jabeen, Farhana;Nawaz, Sarfraz;Tanveer, Sadaf;Iqbal, Majid
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 6, No. 7
    • /
    • pp.1756-1776
    • /
    • 2012
  • Wireless sensor networks (WSNs) are likely to become more prevalent as their cost-effectiveness improves. The spectrum of applications for WSNs spans multiple domains. In environmental sciences, in particular, they are on the way to becoming an essential technology for monitoring the natural environment and the dynamic behavior of transient physical phenomena over space. Existing sensor network query processors (SNQPs) have demonstrated that in-network processing is an effective and efficient means of interacting with WSNs for performing queries over live data. Inspired by these findings, this paper investigates whether spatio-temporal and historical analysis can be carried out over WSNs using distributed query-processing techniques. The emphasis of this work is on the spatial, temporal, and historical aspects of sensed data, which are not adequately addressed in existing SNQPs. The paper surveys novel approaches to storing the data and executing spatio-temporal and historical queries, introduces the challenges and opportunities of research on in-network storage and in-network spatio-temporal query processing, and illustrates the current status of research in this field. We also present new areas where spatio-temporal and historical query processing can be of significant importance.

우리나라 해안사구 분포 현황과 기능특성에 관한 고찰 (Review of the Functional Properties and Spatial Distribution of Coastal Sand Dunes in South Korea)

  • 윤한삼;박소영;유창일
    • 수산해양교육연구
    • /
    • Vol. 22, No. 2
    • /
    • pp.180-194
    • /
    • 2010
  • Coastal sand dunes are dynamic and fragile buffer zones of sand and vegetation where the following three characteristics can be found: large quantities of sand, persistent wind capable of moving sand, and suitable locations for sand to accumulate. The functional properties of coastal sand dunes include roles in sand storage, underground freshwater storage, coastal defense, and ecological environment space, among others. Recently, however, the integrity of coastal dune systems has been threatened by development, including sand extraction for the construction industry, military usage, conversion to golf courses, the building of seawalls and breakwaters, and recreational facility development. In this paper, we examined the development mechanisms and structural/formation types of coastal sand dunes, as well as their functions and value from the perspective of coastal engineering, based on reviews of previous research and a case study of a small coastal sand dune in the Nakdong river estuary. Existing data indicate that there are a total of 133 coastal sand dunes in South Korea: 43 distributed on the East Sea coast (32 in the Gangwon area and 11 in Gyeongsangbuk-do), 60 on the West Sea coast (4 in Incheon and Gyeonggi-do, 42 in Chungcheongnam-do, 9 in Jeollabuk-do, and 5 in Jeollanam-do), and 30 on the South Sea coast (16 in Jeollanam-do, 2 in Gyeongsangnam-do, and 12 in Jeju).

An MCFQ I/O Scheduler Considering Virtual Machine Bandwidth Distribution

  • Park, Jung Kyu
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 20, No. 10
    • /
    • pp.91-97
    • /
    • 2015
  • In this paper, we propose an MCFQ I/O scheduler implemented by modifying the existing Linux CFQ I/O scheduler. MCFQ observes whether I/O bandwidth is distributed according to the user-requested weights and, based on this observation, improves the bandwidth-distribution ability of the existing scheduler by dynamically controlling the I/O time slice of each virtual machine. The use of SSDs as storage has been increasing dramatically in recent computer systems due to their fast performance and low power usage; as SSD usage increases and prices fall, administrators of virtualized systems can take advantage of them. However, studies on guaranteeing SLA (Service Level Agreement) requirements when multiple virtual machines share an SSD are still incomplete. This work was conducted to improve bandwidth-distribution performance when multiple virtual machines share a single SSD in a virtualized environment. In particular, it was observed that bandwidth-distribution performance varied widely when garbage collection occurred in the SSD. To reduce this performance variance, we add a MoTS (Manager of Time Slice) to the existing CFQ I/O scheduler.
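
The sketch below illustrates, in user-space Python rather than kernel code, the kind of feedback MoTS could apply: each virtual machine's observed share of SSD bandwidth is compared with its requested weight, and its time slice is scaled to close the gap. Constants and field names are assumed, not taken from the authors' implementation.

```python
# Illustrative sketch (not the authors' kernel modification): periodically compare
# each VM's observed bandwidth share with its requested weight and nudge its
# I/O time slice toward the target, within assumed bounds.
BASE_SLICE_MS = 100                     # assumed baseline time slice
MIN_SLICE_MS, MAX_SLICE_MS = 20, 400    # assumed clamping bounds

def adjust_time_slices(vms):
    """vms: list of dicts with 'weight', 'bytes_done', 'slice_ms'."""
    total_weight = sum(vm["weight"] for vm in vms)
    total_bytes = sum(vm["bytes_done"] for vm in vms) or 1
    for vm in vms:
        target_share = vm["weight"] / total_weight
        actual_share = vm["bytes_done"] / total_bytes
        correction = target_share / max(actual_share, 1e-6)   # scale by the shortfall
        new_slice = vm["slice_ms"] * correction
        vm["slice_ms"] = max(MIN_SLICE_MS, min(MAX_SLICE_MS, new_slice))
    return vms

vms = [{"weight": 3, "bytes_done": 120e6, "slice_ms": BASE_SLICE_MS},
       {"weight": 1, "bytes_done": 100e6, "slice_ms": BASE_SLICE_MS}]
print(adjust_time_slices(vms))
```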

A Study on the IDL Compiler using the Marshal Buffer Management

  • Kim, Dong-Hyun
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국해양정보통신학회 2005년도 춘계종합학술대회
    • /
    • pp.843-847
    • /
    • 2005
  • The development of distributed applications in standardized CORBA (Common Object Request Broker Architecture) environments reduces development time and system maintenance cost. Because of these advantages, applications in several fields are being developed using CORBA environments. Programmers in CORBA environments usually develop application programs using the CORBA IDL (Interface Definition Language). IDL files are compiled by an IDL compiler and translated into stub and skeleton code mapped onto a particular target language. The stubs produced by IDL compilers marshal data into a message buffer. Before a stub can marshal data into its message buffer, it must ensure that the buffer has at least enough free space to contain the encoded representation of the data. However, the stubs produced by typical IDL compilers check the amount of free buffer space before every atomic datum is marshaled and, if necessary, expand the message buffer. These repeated tests are wasteful and incur overhead, especially if the marshal buffer must be continually expanded, so application performance may be poor. In this paper, we suggest a way for the stub code to reserve enough free space before marshaling data into the message buffer. The method analyzes the overall storage requirements of every message that will be exchanged between client and server; for this analysis, the compiler front end maintains the storage requirements and alignment constraints of the data types. The stub code is thus optimized and application performance is increased.
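
The sketch below restates the buffer-sizing idea in Python rather than generated stub code; the type sizes and alignment constraints are assumed values of the kind an IDL front end would record. The total encoded size of a message is computed once from its field types, so marshaling needs a single allocation instead of a free-space check before every field.

```python
# Illustrative sketch of precomputed marshal-buffer sizing (not the paper's compiler):
# compute the message's worst-case encoded size up front, then encode in place.
import struct

SIZES = {"long": 4, "double": 8, "short": 2}   # assumed per-type encoded sizes
ALIGN = {"long": 4, "double": 8, "short": 2}   # assumed alignment constraints

def message_size(field_types):
    """Total encoded size of a message, honoring per-type alignment."""
    offset = 0
    for t in field_types:
        offset += (-offset) % ALIGN[t] + SIZES[t]   # padding + field size
    return offset

def marshal(fields):
    """fields: list of (type_name, value). Allocate once, then pack each field."""
    buf = bytearray(message_size([t for t, _ in fields]))   # single up-front allocation
    fmt = {"long": "<i", "double": "<d", "short": "<h"}
    offset = 0
    for t, v in fields:
        offset += (-offset) % ALIGN[t]
        struct.pack_into(fmt[t], buf, offset, v)
        offset += SIZES[t]
    return bytes(buf)

print(marshal([("long", 42), ("double", 3.14), ("short", 7)]).hex())
```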

An Adaptive Workflow Scheduling Scheme Based on an Estimated Data Processing Rate for Next Generation Sequencing in Cloud Computing

  • Kim, Byungsang;Youn, Chan-Hyun;Park, Yong-Sung;Lee, Yonggyu;Choi, Wan
    • Journal of Information Processing Systems
    • /
    • Vol. 8, No. 4
    • /
    • pp.555-566
    • /
    • 2012
  • The cloud environment makes it possible to analyze large data sets in a scalable computing infrastructure. In the bioinformatics field, the applications are composed of the complex workflow tasks, which require huge data storage as well as a computing-intensive parallel workload. Many approaches have been introduced in distributed solutions. However, they focus on static resource provisioning with a batch-processing scheme in a local computing farm and data storage. In the case of a large-scale workflow system, it is inevitable and valuable to outsource the entire or a part of their tasks to public clouds for reducing resource costs. The problems, however, occurred at the transfer time for huge dataset as well as there being an unbalanced completion time of different problem sizes. In this paper, we propose an adaptive resource-provisioning scheme that includes run-time data distribution and collection services for hiding the data transfer time. The proposed adaptive resource-provisioning scheme optimizes the allocation ratio of computing elements to the different datasets in order to minimize the total makespan under resource constraints. We conducted the experiments with a well-known sequence alignment algorithm and the results showed that the proposed scheme is efficient for the cloud environment.