• Title/Summary/Keyword: File I/O (파일 입출력)

BomBart : Web-based Programming Environment Support to Graphic User Interface (그래픽 유저 인터페이스를 지원하는 웹 기반 프로그래밍 환경 '봄밭'의 설계 및 구현)

  • Cheon, Junseok; Song, Jiwon; Woo, Gyun
    • The Journal of the Korea Contents Association / v.17 no.5 / pp.317-325 / 2017
  • There has been growing interest in programming education recently. However, to use most programming languages, the corresponding compiler and IDE have to be installed on the computer. Several web-based programming environments, such as Eclipse Che and JDOODLE, have been developed to tackle this issue, but most of them support neither a GUI nor Korean programming languages. This paper proposes a web-based programming environment called BomBart, which supports Saesark, a Korean programming language, with GUI output; it also supports console-based input and output. To support both kinds of interfaces, two compiling subsystems are designed and implemented. To test the effectiveness of the GUI support of BomBart, all the Java tutorial codes on GUI were translated into Saesark and executed on top of BomBart. According to this test, 81.42% of the codes could be successfully converted and executed.
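
A rough sketch of the "two compiling subsystems" idea described above, assuming a simple dispatch between a console pipeline and a GUI pipeline; the function names and the routing rule are illustrative, not BomBart's actual design:

```python
# Hypothetical dispatch between a console and a GUI compile pipeline.
# Nothing here is taken from BomBart's source; it only mirrors the split
# the abstract describes.

def compile_for_console(source: str) -> dict:
    # placeholder for the console-oriented subsystem
    return {"mode": "console", "output": f"ran {len(source)} characters of Saesark"}

def compile_for_gui(source: str) -> dict:
    # placeholder for the GUI-oriented subsystem (would emit a widget tree)
    return {"mode": "gui", "widgets": ["window", "label"]}

def handle_submission(source: str, uses_gui: bool) -> dict:
    # a real system would detect GUI use by analyzing the program;
    # here the caller simply states it
    return compile_for_gui(source) if uses_gui else compile_for_console(source)

print(handle_submission("보여라 '안녕'", uses_gui=False))
```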

Efficient Prefetching and Asynchronous Writing for Flash Memory (플래시 메모리를 위한 효율적인 선반입과 비동기 쓰기 기법)

  • Park, Kwang-Hee; Kim, Deok-Hwan
    • Journal of KIISE: Computing Practices and Letters / v.15 no.2 / pp.77-88 / 2009
  • As the NAND flash memory used as the storage of mobile devices grows in size, the performance of address translation and life-cycle management in the FTL (Flash Translation Layer), which interacts with the file system, becomes very important. In this paper, we propose continuity counters, which represent the number of contiguous physical blocks whose logical addresses are consecutive, to reduce the number of address translations. Furthermore, we propose a prefetching method that preloads frequently accessed pages into main memory to enhance the I/O performance of flash memory. In addition, we use a 2-bit write prediction and asynchronous writing method to predict addresses repeatedly referenced by the host and to avoid write overhead. The experiments show that the proposed method improves I/O performance and extends the life cycle of flash memory. As a result, the address translation of the proposed CFTL (Clustered Flash Translation Layer) is 20% faster than that of conventional FTLs, and CFTL reduces writing time by about 50% compared to conventional FTLs.
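
A minimal sketch of the continuity-counter idea as the abstract describes it: one mapping entry covers a run of physically contiguous blocks whose logical addresses are consecutive, so fewer entries and fewer translations are needed. The class and method names are hypothetical, not taken from CFTL:

```python
# Hypothetical continuity-counter mapping table: one entry can cover a run of
# physically contiguous blocks whose logical addresses are consecutive, so a
# lookup does not need one entry per block.

class ContinuityMap:
    def __init__(self):
        # each entry: logical_start -> (physical_start, run_length)
        self.runs = {}

    def insert(self, logical, physical):
        # extend an existing run if both addresses are consecutive
        for lstart, (pstart, length) in self.runs.items():
            if logical == lstart + length and physical == pstart + length:
                self.runs[lstart] = (pstart, length + 1)
                return
        self.runs[logical] = (physical, 1)

    def translate(self, logical):
        for lstart, (pstart, length) in self.runs.items():
            if lstart <= logical < lstart + length:
                return pstart + (logical - lstart)
        raise KeyError(logical)

fm = ContinuityMap()
for i in range(4):                 # four consecutive writes
    fm.insert(100 + i, 700 + i)    # land on contiguous physical blocks
assert fm.translate(102) == 702    # a single run entry answers the lookup
```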

Design and Implementation of Private Folder Management Systems for the Security of User Data on Multi-user Environments (다중 사용자 환경에서 개인 데이터 보안을 위한 개인 폴더 관리 시스템의 설계 및 구현)

  • Park, Yong-Hun; Park, Hyeong-Soon; Kim, Hak-Chul; Lee, Hyo-Joon; Jang, Yong-Jin; Lim, Jong-Tae; Jang, Su-Min; Seo, Won-Seok; Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.52-61 / 2010
  • Recently, interest in multi-user systems has increased. Multi-user systems allow a number of users to access the system simultaneously, and security is one of the key issues to be addressed in such an environment. We propose a solution based on the NTFS file system that provides personal data security while considering the convenience of users. The system increases user convenience by simplifying the complexity of the security settings on NTFS. We also propose a variety of policies that prevent conflicts arising when different users set up personal folders simultaneously and that keep important folders, such as the Windows system folders, from being set as personal folders. In addition, our system supports prohibition folder lists, so that no one can set the listed folders as their personal folders.
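
A small sketch of the folder-claiming policy in the spirit of the abstract, assuming a prohibition list and a table of already-claimed folders; the paths and the conflict rule are illustrative stand-ins for the paper's actual policies:

```python
from pathlib import Path

# Hypothetical policy check: a folder may be claimed as "personal" only if it
# is not inside a prohibited subtree (e.g. the Windows system folders) and is
# not already claimed by another user.

PROHIBITED = [Path(r"C:\Windows"), Path(r"C:\Program Files")]
claimed = {}  # resolved folder path -> owning user

def can_claim(user, folder):
    folder = Path(folder).resolve()
    if any(folder == p or p in folder.parents for p in PROHIBITED):
        return False                      # inside a prohibited subtree
    owner = claimed.get(folder)
    if owner is not None and owner != user:
        return False                      # conflict with another user
    return True

def claim(user, folder):
    if can_claim(user, folder):
        claimed[Path(folder).resolve()] = user
        return True
    return False
```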

SSD-based RAID-6 System Architecture for Reliability and Performance Enhancement (신뢰성 향상과 성능개선을 위해 다양한 Erasure 코드를 적용한 SSD 기반 RAID-6 시스템 구조)

  • Song, Jae-Seok; Huh, Joon-Moo; Yang, Yu-Seok; Kim, Deok-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.6 / pp.47-56 / 2010
  • HDD-based RAIDs have been used in high-capacity storage systems for traditional data servers. However, their data reliability is relatively low and they consume a lot of power, since hard disk drives are vulnerable to shock and their power consumption is high due to frequent spindle motor operation. Therefore, this paper presents a new SSD-based RAID system architecture using various erasure codes. The proposed method applies the Reed-Solomon, EVENODD, and Liberation coding schemes at the file system level and the device driver level, respectively. It also uses a data allocation method to minimize the reduction of SSD lifespan. Detailed experimental results show that the Liberation code increases the wear-leveling rate of the SSD-based RAID-6 more than the other codes, and that applying the erasure codes at the device driver level performs better than applying them at the file system level. The I/O performance of the RAID-6 system using SSDs is 4.5%~8.5% higher than that using HDDs, and the power consumption of the RAID system using SSDs is 18%~40% lower than that using HDDs.
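
For orientation, a minimal RAID-6 dual-parity computation over GF(2^8): P is the XOR of the data blocks and Q weights each block by a power of the generator 2. This is the textbook construction, not necessarily the exact Reed-Solomon, EVENODD, or Liberation layouts evaluated in the paper:

```python
# Minimal RAID-6 P/Q parity sketch over GF(2^8).

def gf_mul(a, b):
    """Multiply in GF(2^8) with the polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return result

def raid6_parity(stripes):
    """stripes: list of equal-length byte blocks; returns (P, Q) parity blocks."""
    length = len(stripes[0])
    p = bytearray(length)
    q = bytearray(length)
    for i, block in enumerate(stripes):
        coeff = 1
        for _ in range(i):            # coeff = 2**i in GF(2^8)
            coeff = gf_mul(coeff, 2)
        for j in range(length):
            p[j] ^= block[j]
            q[j] ^= gf_mul(coeff, block[j])
    return bytes(p), bytes(q)

data = [b"ABCD", b"EFGH", b"IJKL"]
p, q = raid6_parity(data)
# Losing one data block can be repaired from P alone; losing two needs P and Q.
```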

Design and Implementation of a Backup System for Object based Storage Systems (객체기반 저장시스템을 위한 백업시스템 설계 및 구현)

  • Yun, Jong-Hyeon; Lee, Seok-Jae; Jang, Su-Min; Yoo, Jae-Soo; Kim, Hong-Yeon; Kim, Jun
    • Journal of KIISE: Computer Systems and Theory / v.35 no.1 / pp.1-17 / 2008
  • Recently, object-based storage device systems (OSDs) have been actively researched. They differ from existing block-based storage systems (BSDs) in their storage unit: the storage unit of an OSD is an object that includes the access methods, the attributes of the data, the security information, and so on. An object has no size limit and does not affect the internal storage structures, so OSDs improve I/O throughput and scalability. However, backup systems for OSDs still use the existing backup techniques developed for BSDs; as a result, they require long backup times and do not exploit the characteristics of OSDs. In this paper, we design and implement a new object-based backup system that utilizes the features of OSDs. Our backup system significantly improves backup time over existing systems because raw objects are transferred directly to the backup devices. It also restores the backed-up data much faster than existing systems when system failures occur. In addition, it supports various types of backup and restore requests.
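
A hypothetical object-level backup loop in the spirit of the abstract, assuming each object is stored as a payload file with an attribute sidecar; the on-disk layout and names are made up for illustration, not the paper's format:

```python
import shutil
from pathlib import Path

# Each object (payload plus its attribute record) is copied to the backup
# target as-is, with no block-level scan of the underlying device.

def backup_objects(object_store, backup_dir):
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    for obj in Path(object_store).glob("*.obj"):
        shutil.copy2(obj, backup_dir / obj.name)              # raw payload
        attrs = obj.with_suffix(".attr")
        if attrs.exists():
            shutil.copy2(attrs, backup_dir / attrs.name)      # object attributes
```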

Data Deduplication Method using Locality-based Chunking policy for SSD-based Server Storages (SSD 기반 서버급 스토리지를 위한 지역성 기반 청킹 정책을 이용한 데이터 중복 제거 기법)

  • Lee, Seung-Kyu; Kim, Ju-Kyeong; Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.2 / pp.143-151 / 2013
  • NAND flash-based SSDs (Solid State Drives) have the advantages of fast I/O performance and low power consumption, so they are widely used as storage in tablets, desktop PCs, smartphones, and servers. However, SSD cells wear out as the number of writes increases. In order to improve the lifespan of an SSD, a variety of data deduplication techniques have been introduced. The general fixed-size splitting method allocates chunks of a fixed size without considering the locality of the data, so it may perform unnecessary chunking and hash-key generation, while the variable-size splitting method incurs excessive computation since it compares data byte by byte for deduplication. This paper proposes an adaptive chunking method based on the application locality and file-name locality of the data written to SSD-based server storage. The proposed method splits data into 4 KB or 64 KB chunks adaptively according to the application locality and file-name locality of the duplicated data, so that it can reduce the overhead of chunking and hash-key generation and prevent duplicated data from being written. The experimental results show that the proposed method enhances write performance and reduces power consumption and operation time compared to the existing variable-size splitting method and the fixed-size splitting method using 4 KB chunks.
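
A minimal sketch of adaptive chunking plus hash-based deduplication; the 4 KB / 64 KB decision rule below, keyed on a hypothetical file-extension hint, is only a stand-in for the paper's application- and file-name-locality heuristics:

```python
import hashlib

# Adaptive chunk size plus a hash-keyed store so duplicate chunks are written once.

CHUNK_SMALL, CHUNK_LARGE = 4 * 1024, 64 * 1024
dedup_store = {}   # hash key -> chunk bytes

def pick_chunk_size(file_name):
    # assumption: small, frequently rewritten file types get fine-grained chunks
    return CHUNK_SMALL if file_name.endswith((".log", ".db")) else CHUNK_LARGE

def write_with_dedup(file_name, data):
    size = pick_chunk_size(file_name)
    written = 0
    for off in range(0, len(data), size):
        chunk = data[off:off + size]
        key = hashlib.sha1(chunk).hexdigest()
        if key not in dedup_store:      # only new chunks reach the SSD
            dedup_store[key] = chunk
            written += len(chunk)
    return written

payload = b"x" * (128 * 1024)
print(write_with_dedup("trace.log", payload))   # identical 4 KB chunks stored once
```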

VLSI Design of DWT-based Image Processor for Real-Time Image Compression and Reconstruction System (실시간 영상압축과 복원시스템을 위한 DWT기반의 영상처리 프로세서의 VLSI 설계)

  • Seo, Young-Ho; Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.102-110 / 2004
  • In this paper, we propose a VLSI structure for a real-time image compression and reconstruction processor using the 2-D discrete wavelet transform (DWT) and implement it in hardware with minimal resources using an ASIC library. In the implemented hardware, the data path consists of the DWT kernel for the forward and inverse wavelet transforms, the quantizer/dequantizer, the Huffman encoder/decoder, the adder/buffer for the inverse wavelet transform, and the input/output interface modules. The control part consists of the programming register, the controller that decodes the instructions and generates the control signals, and the status register that exposes the internal state outside the circuit. Depending on how it is programmed, the designed circuit can output wavelet coefficients, quantization coefficients or indices, and Huffman codes in image compression mode, and the Huffman decoding result, reconstructed quantization coefficients, and reconstructed wavelet coefficients in image reconstruction mode. The programming register has 16 stages, and one instruction drives one horizontal (or vertical) filtering pass at a given level; since the registers operate automatically in the right order, a 4-level discrete wavelet transform can be executed by a single program. We synthesized the designed circuit with the synthesis library for the Hynix 0.35 um CMOS process using Synopsys and extracted the gate-level netlist. Timing information was extracted from the netlist using the Vela tool, timing simulation was run with NC-Verilog, and place-and-route and layout were performed with the Apollo tool. The implemented hardware has about 50,000 gates and operates stably at an 80 MHz clock frequency.
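
To illustrate the transform stage of the data path in software, a one-level 2-D Haar DWT; the paper's hardware kernel and filter bank are not specified here, so Haar is used only as the simplest stand-in:

```python
import numpy as np

# One-level 2-D Haar DWT: horizontal filtering of the rows, then vertical
# filtering of the columns, leaving the LL/LH/HL/HH sub-bands in the quadrants.

def haar_1d(x):
    pairs = x.reshape(-1, 2)
    avg = (pairs[:, 0] + pairs[:, 1]) / 2.0    # low-pass (approximation)
    diff = (pairs[:, 0] - pairs[:, 1]) / 2.0   # high-pass (detail)
    return np.concatenate([avg, diff])

def haar_2d_level(img):
    rows = np.apply_along_axis(haar_1d, 1, img)    # horizontal pass
    return np.apply_along_axis(haar_1d, 0, rows)   # vertical pass

img = np.arange(64, dtype=float).reshape(8, 8)
coeffs = haar_2d_level(img)   # sub-bands sit in the four quadrants of coeffs
```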

Spatial Computation on Spark Using GPGPU (GPGPU를 활용한 스파크 기반 공간 연산)

  • Son, Chanseung; Kim, Daehee; Park, Neungsoo
    • KIPS Transactions on Computer and Communication Systems / v.5 no.8 / pp.181-188 / 2016
  • Recently, as the amount of spatial information has grown, interest in spatial information processing has increased. Spatial database systems extended from traditional relational database systems have difficulty handling large data sets because of limited scalability, and SpatialHadoop, extended from the Hadoop system, has low performance because its spatial computations write many intermediate results to disk, degrading performance. In this paper, Spatial Computation Spark (SC-Spark), an in-memory distributed processing framework, is proposed. SC-Spark extends Spark in order to perform spatial operations on large-scale data efficiently. In addition, a GPGPU-based SC-Spark is developed to further improve performance. SC-Spark exploits Spark's ability to hold intermediate results in memory, and the GPGPU-based SC-Spark can perform spatial operations in parallel using the many processing elements of a GPU. To verify the proposed work, experiments with SC-Spark and GPGPU-based SC-Spark were performed on a single AMD system for point-in-polygon and spatial join operations. The experimental results showed that SC-Spark and GPGPU-based SC-Spark were up to 8 times faster than SpatialHadoop.
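
A plain-Spark, CPU-only sketch of the point-in-polygon step using a ray-casting test over an RDD; SC-Spark and its GPGPU kernels are not publicly available here, so this only shows the shape of the computation on ordinary Spark:

```python
from pyspark import SparkContext

# Filter an RDD of points by a ray-casting point-in-polygon test.

def point_in_polygon(pt, poly):
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    sc = SparkContext("local[*]", "point-in-polygon")
    square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
    points = sc.parallelize([(1.0, 1.0), (5.0, 5.0), (3.5, 0.5)])
    hits = points.filter(lambda p: point_in_polygon(p, square)).collect()
    print(hits)   # points falling inside the square
    sc.stop()
```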

GWB: An integrated software system for Managing and Analyzing Genomic Sequences (GWB: 유전자 서열 데이터의 관리와 분석을 위한 통합 소프트웨어 시스템)

  • Kim, In-Cheol; Jin, Hoon
    • Journal of Internet Computing and Services / v.5 no.5 / pp.1-15 / 2004
  • In this paper, we explain the design and implementation of GWB (Gene WorkBench), a web-based, integrated system for efficiently managing and analyzing genomic sequences. Few existing software systems that handle genomic sequences provide both management and analysis facilities, and the analysis programs tend to be unit programs covering only a single function or part of the required functions. Moreover, these programs are widely distributed over the Internet and require different execution environments, so a great deal of manual and conversion work is needed to use them together, which causes much inconvenience for life science researchers. In order to overcome the problems of existing systems and support genomic research more conveniently and effectively, this paper integrates both management and analysis facilities into a single system called GWB. The most important design issues for GWB are how to integrate many different analysis programs into a single software system and how to provide the data or databases of different formats required to run these programs. To address these issues, GWB integrates different analysis programs through common input/output interfaces called wrappers, suggests a common format for genomic sequence data, organizes local databases consisting of a relational database and an indexed sequential file, and provides facilities for converting data among several well-known formats and for exporting local databases into XML files.
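
A hypothetical sketch of the wrapper idea: every external analysis program sits behind the same interface and converts to and from a common in-memory record. The tool command below is a placeholder, not one of GWB's actual wrappers:

```python
import subprocess
from abc import ABC, abstractmethod

class AnalysisWrapper(ABC):
    """Common input/output interface for heterogeneous analysis programs."""

    @abstractmethod
    def to_tool_format(self, record):
        ...

    @abstractmethod
    def run(self, record):
        ...

class FastaToolWrapper(AnalysisWrapper):
    def to_tool_format(self, record):
        return f">{record['id']}\n{record['sequence']}\n"     # FASTA text

    def run(self, record):
        fasta = self.to_tool_format(record)
        # placeholder command; a real wrapper would invoke the external tool
        out = subprocess.run(["cat"], input=fasta, text=True,
                             capture_output=True).stdout
        return {"id": record["id"], "raw_output": out}

result = FastaToolWrapper().run({"id": "seq1", "sequence": "ATGCGTA"})
```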

Development of Quality Assurance Software for PRESAGE-REU Gel Dosimetry (PRESAGE-REU 겔 선량계의 분석 및 정도 관리 도구 개발)

  • Cho, Woong; Lee, Jaegi; Kim, Hyun Suk; Wu, Hong-Gyun
    • Progress in Medical Physics / v.25 no.4 / pp.233-241 / 2014
  • The aim of this study is to develop a new software tool for 3D dose verification using the PRESAGE-REU gel dosimeter. The tool includes the following functions: importing 3D doses from treatment planning systems (TPS), importing 3D optical densities (ODs), converting ODs to doses, 3D registration between two volumetric data sets by translational and rotational transformations, and evaluation with the 3D gamma index. To obtain the correlation between ODs and doses, CT images of a cylindrical PRESAGE-REU gel were acquired, and a volumetric modulated arc therapy (VMAT) plan was designed to deliver radiation doses from 1 Gy to 6 Gy to six disk-shaped virtual targets along the z-axis. After the VMAT plan was delivered, 3D OD data were reconstructed from 512 projections from a Vista optical CT scanner (Modus Medical Devices Inc., Canada) every 2 hours after irradiation. A curve for converting ODs to doses was derived by comparing the TPS dose profile to the OD profile along the z-axis, and the 3D OD data were converted to absorbed doses using this curve. Supra-linearity was observed between doses and ODs, and the ODs decayed by about 60% per 24 hours depending on their magnitudes. Measured doses from the PRESAGE-REU gel agreed well with the TPS doses in the central region, but large under-doses were observed in the peripheral region of the cylindrical geometry. The gamma passing rate for the 3D doses was 70.36% under gamma criteria of 3% dose difference and 3 mm distance-to-agreement. The low passing rate resulted from the refractive index mismatch between the PRESAGE gel and the oil bath in the optical CT scanner. In conclusion, the developed software is useful for 3D dose verification with PRESAGE gel dosimetry, but further improvement of the gel dosimetry system is required.
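
A minimal sketch of the OD-to-dose calibration step described above: paired TPS dose and OD samples along the z-axis are fitted, and the fitted curve converts the whole OD volume. The sample values below are made up for illustration, not measured data:

```python
import numpy as np

# Fit dose as a function of OD from paired profile samples, then apply the
# fitted curve element-wise to the 3D OD volume.

tps_dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])            # Gy, six targets
measured_od = np.array([0.11, 0.24, 0.39, 0.55, 0.74, 0.96])   # arbitrary units

# a quadratic fit allows for the supra-linear OD response noted in the abstract
coeffs = np.polyfit(measured_od, tps_dose, deg=2)
od_to_dose = np.poly1d(coeffs)

od_volume = np.random.rand(64, 64, 64)       # stand-in for the 3D OD data
dose_volume = od_to_dose(od_volume)          # element-wise conversion to Gy
```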