• Title/Summary/Keyword: multimedia storage

The Research On the Energy Storage System Using SuperCapacitor (슈퍼커패시터를 적용한 에너지 저장시스템 설계에 관한 연구)

  • Kim, IL-Song
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.11
    • /
    • pp.215-222
    • /
    • 2018
  • In this paper, research on an energy storage system adopting a super-capacitor is presented. The key advantage over conventional lead-acid battery systems is the high power capability afforded by the super-capacitor's power characteristics. The suggested system can deliver high power for short periods and achieve substantial power-quality improvements. Target applications include power-quality improvement systems and motor starting, which require high power during transients. The energy conversion system consists of a bi-directional converter and an inverter and offers fast, high-power charging and discharging. Design steps for the two-loop controller of the bi-directional inverter are presented and verified through fabrication and experiment. The two-loop controller design starts from a linearized transfer function derived from the state-averaging model with state decoupling. The current controller is designed for 20% overshoot and a specified settling time, while the voltage controller is designed for no overshoot and a settling time ten times longer than that of the current controller. The design is verified from the step-input response. The designed controllers exhibit unity power factor, fast response, and zero steady-state error, and thus improve the power quality of the grid.
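
A rough sketch of the cascaded two-loop structure described above, with an outer voltage loop generating the current reference for a faster inner current loop. The discrete-time PI form, gains, and sample period are illustrative assumptions, not the authors' design values.

```python
# Rough sketch (assumed gains and sample time): a discrete-time cascaded
# two-loop controller. The outer voltage PI produces the current reference;
# the faster inner current PI produces the converter command.

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

dt = 1e-4                                    # control period [s] (assumed)
voltage_loop = PI(kp=0.5, ki=20.0, dt=dt)    # slow outer loop (no-overshoot target)
current_loop = PI(kp=5.0, ki=500.0, dt=dt)   # fast inner loop (~10x faster)

def control_step(v_ref, v_meas, i_meas):
    """One sampling instant: voltage error -> current reference -> converter command."""
    i_ref = voltage_loop.update(v_ref - v_meas)
    return current_loop.update(i_ref - i_meas)

print(control_step(v_ref=400.0, v_meas=380.0, i_meas=0.0))
```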

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for handling the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling a client's business; therefore, a separate log data processing system is needed to gather, store, categorize, and analyze the log data generated while processing the client's business. However, flexible storage expansion for a massive amount of unstructured log data and the many functions needed to categorize and analyze that data are difficult to realize in existing computing environments. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for fast and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have strict schemas that are inappropriate for unstructured log data and cannot easily expand to additional nodes when the amount of stored data increases rapidly. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily expand through node dispersion as data grow rapidly; they are non-relational databases with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system because its flexible schema makes unstructured log data easy to process, it supports flexible node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance evaluations over various chunk sizes.
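
The routing role of the log collector module can be pictured with the sketch below. The use of pymongo, the collection-per-log-type layout, and the real_time flag are illustrative assumptions (a locally running MongoDB instance is assumed), not details taken from the paper.

```python
# Sketch of the log collector's routing role (assumptions: pymongo, a local
# MongoDB instance, one collection per log type, a "real_time" flag marking
# records destined for the relational store).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["bank_logs"]

def store_in_mysql(record: dict) -> None:
    # Stand-in for the MySQL module used for real-time analysis.
    print("to MySQL:", record)

def collect(record: dict) -> None:
    """Classify one log record and route it to MongoDB or the relational store."""
    if record.get("real_time"):
        store_in_mysql(record)
    else:
        # free-schema insert: the record is stored as-is, whatever its fields
        db[record.get("type", "unknown")].insert_one(record)

collect({"type": "transaction", "branch": "A01", "amount": 1200})
collect({"type": "login", "real_time": True, "user": "u42"})
```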

Proxy Caching Scheme Based on the User Access Pattern Analysis for Series Video Data (시리즈 비디오 데이터의 접근 패턴에 기반한 프록시 캐슁 기법)

  • Hong, Hyeon-Ok;Park, Seong-Ho;Chung, Ki-Dong
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.8
    • /
    • pp.1066-1077
    • /
    • 2004
  • The dramatic increase in the number of Internet users has created demand for high-quality delivery of continuous-media content on the web. While there are plenty of reasons to create rich media content, delivering this high-bandwidth content over the Internet causes problems such as server overload, network congestion, and client-perceived latency. To address these problems, we present two caching schemes (PPC and PPCwP) that consider the characteristics of continuous-media objects and user access patterns. The PPC scheme periodically calculates the popularity of objects based on the playback quantity and determines the optimal size of the initial fraction of a continuous-media object to be cached in proportion to the calculated popularity. The PPCwP scheme calculates the expected popularity using series information and prefetches the expected initial fraction of newly created continuous-media objects. Under the PPCwP scheme, the initial client-perceived latency and the amount of data transferred from the remote server are reduced, and the limited cache storage space is utilized efficiently. Trace-driven simulations were performed to evaluate the presented caching schemes using the log files of iMBC. In these simulations, PPC and PPCwP outperform LRU and LFU in terms of BHR and DSR.
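
The core of the PPC scheme, caching an initial prefix whose size is proportional to measured popularity, might look like the sketch below. The normalization against a fixed cache budget is an assumed simplification rather than the paper's exact formula.

```python
# Sketch of the PPC idea (assumed normalization): popularity is the playback
# quantity per object in the last period, and each object gets an initial-prefix
# allocation proportional to that popularity within a fixed cache budget.

def prefix_sizes(playback_bytes: dict, cache_budget: int) -> dict:
    """playback_bytes: object id -> bytes played back in the current period."""
    total = sum(playback_bytes.values()) or 1
    return {
        vid: int(cache_budget * played / total)   # prefix size proportional to popularity
        for vid, played in playback_bytes.items()
    }

sizes = prefix_sizes({"ep01": 8_000_000, "ep02": 2_000_000}, cache_budget=5_000_000)
print(sizes)   # ep01 gets a 4 MB prefix, ep02 a 1 MB prefix
```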

Development of a Metamodel-Based Healthcare Service System using OSGi Component Platform (OSGi 컴포넌트 플랫폼을 이용한 메타모델 기반의 건강관리 서비스 시스템 개발)

  • Kim, Tae-Woong;Kim, Hee-Cheol
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.1
    • /
    • pp.121-132
    • /
    • 2011
  • A healthcare system is a type of medical information system that supports early detection and prevention of diseases by checking a person's health condition periodically. Such a healthcare system is based on vital signs obtained from the body. However, existing systems store and describe vital signs differently depending on the measurement device, and their evaluation methods also differ. This brings disadvantages such as a lack of interoperability between systems, higher development costs, and the absence of a unified system. This study therefore develops a healthcare system based on a meta model. To this end, the study describes and stores vital-sign data based on the standard HL7 meta model and applies OCL, a formal specification language, to define wellness indexes and extract data for evaluating health risk appraisals. In addition, the study implements components based on OSGi and assembles them so that various devices and systems can be extended easily. Describing vital-sign data with the meta model ensures interoperability between systems, and defining the wellness index with OCL standardizes the evaluation of health conditions. It also provides clear specifications.
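
The wellness-index idea, an OCL-style constraint evaluated over vital-sign data described by a common model, can be approximated as below. The field names, threshold, and scoring rule are illustrative assumptions, and plain Python predicates stand in for OCL.

```python
# Sketch of the wellness-index idea (assumed fields, threshold, and scoring):
# an OCL-style invariant, written here as a Python predicate, evaluated over
# vital-sign observations described by a common HL7-like model.
from dataclasses import dataclass

@dataclass
class VitalSign:            # minimal stand-in for an HL7-style observation
    systolic_bp: float      # mmHg
    heart_rate: float       # beats per minute

def blood_pressure_ok(v: VitalSign) -> bool:
    # roughly: context VitalSign inv: self.systolic_bp < 140
    return v.systolic_bp < 140

def wellness_index(records):
    """Fraction of observations satisfying the constraint (illustrative index)."""
    return sum(blood_pressure_ok(r) for r in records) / len(records)

print(wellness_index([VitalSign(120, 70), VitalSign(150, 80)]))   # 0.5
```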

Design and Development of Middleware for Clinical Trial System based on Brain MR Image (뇌 MR 영상기반 임상연구 시스템을 위한 미들웨어 설계 및 개발)

  • Jeon, Woong-Gi;Park, Kyoung-Jong;Lee, Young-Seung;Choi, Hyun-Ju;Jeong, Sang-Wook;Kim, Dong-Eog;Choi, Heung-Kook
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.805-813
    • /
    • 2012
  • In this paper, we design and develop middleware that provides efficient database access for an existing brain-disease clinical research system. The clinical research system consists of two parts, a register and an analyzer: the register collects registration data, and the analyzer produces statistics based on diverse variables. The middleware is designed for database management and for processing clients' large data queries. Each function is separated into its own module, which reduces the coupling between functionalities and allows the modules to be reused. The image data module uses a new compression method that converts images to text for effective management and storage in the database. We tested the middleware using 700 actual clinical records. As a result, the total data transmission time was up to 115 times faster than in the existing system. The improved module structure provides robust, reliable system operation and enhanced security functionality. The importance of such middleware is expected to grow as large medical databases are constructed.
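
One plausible reading of the image-to-text conversion used by the image data module is a compress-then-encode round trip like the sketch below; zlib and Base64 are assumed stand-ins, since the listing does not spell out the encoding.

```python
# Sketch of an image-to-text conversion (assumed encoding: zlib + Base64); the
# abstract states that images are converted to text for database storage but
# does not fix this particular scheme.
import base64
import zlib

def image_to_text(image_bytes: bytes) -> str:
    return base64.b64encode(zlib.compress(image_bytes)).decode("ascii")

def text_to_image(text: str) -> bytes:
    return zlib.decompress(base64.b64decode(text))

raw = b"\x00\x01" * 1024                   # stand-in for MR image data
encoded = image_to_text(raw)
assert text_to_image(encoded) == raw       # round-trip check
print(len(raw), "bytes ->", len(encoded), "characters of text")
```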

Data Cube Index to Support Integrated Multi-dimensional Concept Hierarchies in Spatial Data Warehouse (공간 데이터웨어하우스에서 통합된 다차원 개념 계층 지원을 위한 데이터 큐브 색인)

  • Lee, Dong-Wook;Baek, Sung-Ha;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.10
    • /
    • pp.1386-1396
    • /
    • 2009
  • Most decision-support functions of a spatial data warehouse rely on OLAP operations over a spatial cube. Higher performance is generally achieved by indexing the cube, which stores a huge amount of pre-aggregated information. Hierarchical Dwarf was proposed as one solution; it can be seen as an extension of Dwarf, a compressed index for cube structures. However, it does not consider the spatial dimension and even aggregates incorrectly when there are redundant values at the lower levels. OLAP-favored Searching was proposed as a spatial-hierarchy-based OLAP operation that exploits the advantages of the R-tree. Although it supports aggregation functions over specified areas well, it ignores operations on the spatial dimensions. In this paper, an indexing approach that utilizes the concept hierarchy of the spatial cube for decision support is proposed. The index consists of concept hierarchy trees for all dimensions, which are linked according to the tuples stored in the fact table. It saves storage cost by preventing identical trees from being created redundantly, and it reduces OLAP operation cost by integrating the spatial and aspatial dimensions in the virtual concept hierarchy.
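
The index structure, per-dimension concept hierarchy trees whose members link to fact-table tuples, might be organized roughly as in the sketch below. The dimension values are illustrative, and the sharing of identical trees described in the abstract is only noted in a comment.

```python
# Sketch of the index structure (assumed dimension values): one concept-
# hierarchy tree per dimension, whose members keep links to the fact-table
# tuples they cover. The paper additionally avoids creating identical trees
# redundantly; that sharing is not shown here.

class HierarchyNode:
    def __init__(self, label):
        self.label = label
        self.children = {}       # member label -> HierarchyNode
        self.tuple_ids = []      # fact-table rows attached at this member

    def insert(self, path, tuple_id):
        node = self
        for label in path:                       # e.g. ("Korea", "Seoul")
            node = node.children.setdefault(label, HierarchyNode(label))
        node.tuple_ids.append(tuple_id)

fact_table = [("Seoul sale", 100), ("Busan sale", 50)]   # illustrative tuples

region = HierarchyNode("ALL")            # spatial dimension hierarchy
time = HierarchyNode("ALL")              # aspatial (time) dimension hierarchy
region.insert(("Korea", "Seoul"), 0)
region.insert(("Korea", "Busan"), 1)
time.insert(("2009", "Q1"), 0)
time.insert(("2009", "Q2"), 1)

print(region.children["Korea"].children["Seoul"].tuple_ids)   # [0]
```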

Efficient Publishing Spatial Information as GML for Interoperability of Heterogeneous Spatial Database Systems (이질적인 공간정보시스템의 상호 운용성을 위한 효과적인 지리데이터의 GML 사상)

  • 정원일;배해영
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.1
    • /
    • pp.12-26
    • /
    • 2004
  • In the past, geographic data were built and served in formats specific to each GIS (Geographic Information System). Recently, interoperability has become important for applying the various geographic data held by existing GISs efficiently. To this end, the OGC (Open GIS Consortium) proposed GML (Geography Markup Language) to provide interoperability between heterogeneous GISs in distributed environments. GML is an XML encoding for the transport and storage of geographic information, including both the spatial and non-spatial properties of geographic features, and GML documents are served according to the Web Map Server Implementation Specification. Accordingly, prototypes for the reciprocal interchange of geographic information between conventional GISs and GML documents are widely studied. In this paper, we propose a method for mapping geographic information between a spatial database and GML to support interoperability between heterogeneous geographic information systems. First, the scheme for converting geographic information in a conventional spatial database into GML documents according to the GML specification is explained; second, the scheme for transforming the geographic information of GML documents into the geographic data of a spatial database is presented. Consequently, by offering interoperability over the geographic information already built in conventional GISs through this spatial-database-to-GML mapping, the proposed method is applicable to a web-based framework for integrated geographic information services.
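
The database-to-GML direction can be sketched as below for a single point feature. The (name, x, y) table layout and the GML 2-style gml:Point/gml:coordinates encoding are assumptions about the mapping, not the paper's exact rules.

```python
# Sketch of the database-to-GML direction for a single point feature
# (assumptions: a (name, x, y) row from a hypothetical spatial table and a
# GML 2-style gml:Point / gml:coordinates encoding).
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML_NS)

def point_to_gml(name: str, x: float, y: float) -> str:
    feature = ET.Element("Feature", {"name": name})
    point = ET.SubElement(feature, f"{{{GML_NS}}}Point")
    coords = ET.SubElement(point, f"{{{GML_NS}}}coordinates")
    coords.text = f"{x},{y}"
    return ET.tostring(feature, encoding="unicode")

# e.g. one row fetched from the spatial database
print(point_to_gml("city_hall", 126.9784, 37.5667))
```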

Processing Techniques of Layer Channel Image for 3D Image Effects (3D 영상 효과를 위한 레이어 채널 이미지의 처리 기법)

  • Choi, Hak-Hyun;Kim, Jung-Hee;Lee, Myung-Hak
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.1
    • /
    • pp.272-281
    • /
    • 2008
  • A layer channel, which can express effects on a 3D image, is inserted into the image so that applications can render effects efficiently. Current effect-rendering methods require separate sources for storage and image processing because images and effects are managed individually and then mixed. By processing the image and its layer channels together, we can reduce costs and improve the results of image processing. The image format is changed to embed a layer channel in the image; a hide function conceals the layer channel using simple techniques such as alpha blending; and control logic makes it possible to access the image and layer channels simultaneously while the image is loaded. Because the layer channel and the image are combined, reusability improves, the result can be used in any program, and images in the changed format can still be viewed in general image viewers. With this configuration, processing speed improves because the image and layer channels are loaded simultaneously, and the storage space required for layer-channel images is reduced because the layer channel is embedded in the 3D image. It also allows the 3D image and its layer channels to be managed together, enabling effective expression, and it can be expected to be used effectively for multimedia images in practical applications.
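
The combine-and-blend idea, keeping a layer channel next to the base RGB data and alpha-blending it at render time, can be illustrated as follows; the pixel layout is a deliberate simplification of the modified image format.

```python
# Sketch of the combine-and-blend idea (assumed pixel layout): the layer
# channel is stored alongside the base RGB data in the same file and is
# alpha-blended over the base image at render time; alpha = 0 hides the layer.

def blend_pixel(base, layer, alpha):
    """Alpha-blend one layer-channel pixel over one base pixel (RGB tuples)."""
    return tuple(round(alpha * l + (1 - alpha) * b) for b, l in zip(base, layer))

base_image = [(200, 200, 200), (10, 20, 30)]   # base RGB pixels
layer_channel = [(255, 0, 0), (0, 255, 0)]     # effect layer kept in the same image
alpha = 0.4                                    # layer visibility

rendered = [blend_pixel(b, l, alpha) for b, l in zip(base_image, layer_channel)]
print(rendered)   # base image with the effect layer blended in
```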

Fault Test Algorithm for MLC NAND-type Flash Memory (MLC NAND-형 플래시 메모리를 위한 고장검출 테스트 알고리즘)

  • Jang, Gi-Ung;Hwang, Phil-Joo;Chang, Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.49 no.4
    • /
    • pp.26-33
    • /
    • 2012
  • As flash memory has increased its share of data storage in embedded systems and now occupies most of the area in a system, it has a profound impact on system reliability. Flash memory is divided into NOR and NAND types according to the cell array structure, and classified as SLC (Single Level Cell) or MLC (Multi Level Cell) according to the reference voltage. Although NAND flash memory is slower than NOR flash, it offers large capacity at low cost. Driven by demand from the mobile market, MLC NAND flash is widely adopted for multimedia data storage. Accordingly, fault-detection algorithms are becoming increasingly important for ensuring the reliability of MLC NAND flash memory. Test algorithms ranging from traditional RAM tests to SLC flash memory tests have been studied extensively and detect many faults, but few attempts have been made at fault-detection testing for MLC flash memory. In this paper, we extend an SLC NAND flash fault-detection algorithm to test MLC NAND flash memory and narrow this gap.
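
A generic march-style test of the kind such SLC algorithms build on is sketched below against a simulated array with one stuck-at-0 cell. This is a textbook march element sequence, not the authors' MLC-specific algorithm.

```python
# Sketch of a generic march-style test (not the authors' MLC-specific
# algorithm): write/read alternating patterns over the array in both address
# directions and report mismatching cells.

def march_test(memory):
    faults = []
    n = len(memory)
    for addr in range(n):                 # M0 (up): write 0
        memory[addr] = 0
    for addr in range(n):                 # M1 (up): read 0, then write 1
        if memory[addr] != 0:
            faults.append(addr)
        memory[addr] = 1
    for addr in reversed(range(n)):       # M2 (down): read 1, then write 0
        if memory[addr] != 1:
            faults.append(addr)
        memory[addr] = 0
    return faults

class FaultyMemory(list):
    """Simulated array with one cell stuck at 0 (a simple stuck-at fault)."""
    STUCK_AT_ZERO = 5

    def __setitem__(self, addr, value):
        super().__setitem__(addr, 0 if addr == self.STUCK_AT_ZERO else value)

cells = FaultyMemory([0] * 16)
print(march_test(cells))   # [5]: the stuck-at-0 cell fails the read-1 element
```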

Energy-efficient Correlated Data Placement Techniques for Multi-disk-based Mobile Systems (다중 디스크 기반 모바일 시스템 대상의 에너지 효율적인 연관 데이타 배치 기법)

  • Kim, Young-Jin;Kwon, Kwon-Taek;Kim, Ji-Hong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.3
    • /
    • pp.101-112
    • /
    • 2007
  • Hard disks have been the most prevalent secondary storage devices, and their use is becoming more important in mobile computing systems because of I/O-intensive applications such as multimedia applications and games. However, the significant power consumption of disk drives still critically limits the battery lifetime of mobile systems. In this paper, we show that using several smaller disks (instead of one large disk) can be an energy-efficient secondary storage solution on typical mobile platforms without a significant performance penalty. We also propose a novel energy-efficient technique that clusters related data into groups and migrates the correlated groups to the same disk. We compare this method with the existing data concentration scheme and also combine the two. The experiments show that our technique reduces energy consumption by up to 34% when a pair of 1.8-inch disks is used instead of a single 2.5-inch disk, with a negligible increase in average response time. The results also show that our method saves up to 14.8% of disk energy consumption and improves the average I/O response time by up to 10 times over the existing scheme.
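
The correlated-placement idea, grouping files that are accessed together and pinning each group to one disk so the others can stay spun down, might be prototyped as below. The co-access threshold, greedy grouping, and round-robin placement are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of the correlated-placement idea (assumed threshold, greedy grouping,
# and round-robin disk assignment): files that are frequently accessed together
# form a group, and each group is placed on a single disk so the remaining
# disks can stay spun down.

def group_correlated(files, co_access, threshold=0.5):
    """files: ids; co_access: sorted (a, b) pair -> fraction of co-accesses."""
    groups = []
    for f in files:
        for g in groups:
            if all(co_access.get(tuple(sorted((f, m))), 0) >= threshold for m in g):
                g.append(f)
                break
        else:
            groups.append([f])
    return groups

def place_on_disks(groups, num_disks):
    """Assign whole groups to disks (round-robin), never splitting a group."""
    return {d: [g for i, g in enumerate(groups) if i % num_disks == d]
            for d in range(num_disks)}

co = {("a", "b"): 0.9, ("a", "c"): 0.1, ("b", "c"): 0.2, ("c", "d"): 0.8}
groups = group_correlated(["a", "b", "c", "d"], co)
print(groups)                      # [['a', 'b'], ['c', 'd']]
print(place_on_disks(groups, 2))   # each correlated group lands on a single disk
```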