• Title/Summary/Keyword: in-memory file system

Search Results: 241

Private Key Management Scheme Using Secret Sharing and Steganography (비밀 분산 및 스테가노그래피를 이용한 개인 키 보관 기법)

  • Lee, Jaeheung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.4
    • /
    • pp.35-41
    • /
    • 2017
  • This paper introduces a new method for storing a private key. The private key is divided into "n" pieces by a (k, n) secret sharing method, and each piece is then hidden in a photo file using steganography. A user can restore the private key as long as he remembers the locations of "k" of the photos, while an attacker who must search the numerous photo files stored in the system will find it extremely difficult to extract the key. The scheme is convenient, since the user only needs to remember k positions among the n photo files, yet it still guarantees a degree of security: even an attacker who knows k-1 of the photo file locations cannot restore the private key. A sketch of the two building blocks appears below.
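
A minimal Python sketch of the (k, n) secret-sharing half of the scheme, assuming Shamir's method over a prime field. The prime and the share format are illustrative choices, and the steganographic hiding of each share in a photo file is not shown.

```python
# Minimal (k, n) secret sharing sketch -- an assumed Shamir-style
# construction, not the paper's implementation.
import random

PRIME = 2**127 - 1  # assumed field size; must exceed the secret

def split_secret(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 0xDEADBEEFCAFEBABE                  # stand-in for a private key
shares = split_secret(key, k=3, n=5)      # hide each share in one photo file
assert recover_secret(shares[:3]) == key  # any 3 of the 5 shares suffice
```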

The Development of a Non-Linear Finite Element Model for Ductile Fracture Analysis - For Mini-Computer - (연성파괴 해석을 위한 비선형 유한요소 모델의 개발 -소형 컴퓨터를 위한 -)

  • 정세희;조규종
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.10 no.1
    • /
    • pp.25-33
    • /
    • 1986
  • In this paper, a frontal-method-based elastic-plastic F.E.M. program for mini-computers was developed. Since the executable program size was restricted by the system core memory of the mini-computer, the active variables were kept in memory on a per-element basis and the inactive variables were written to an external disk file. The active variables of the final program were reduced enough to run a model of about 1,000 degrees of freedom on a mini-computer whose integer range was limited to 32,767. A modified CT fracture test specimen was examined to validate the developed program, and the calculated results were compared with experimental results for the crack-tip plastic deformation zone. A recrystallization technique was adopted to visualize the regions of intensive plastic deformation. The calculations based on the Von Mises criterion agreed well with the experimental results in the intensive plastic region above 2% offset strain, and the F.E.M. results agreed well with the theoretical plastic boundary calculated from the stress intensity factor as $r_p = (K_I^2/2\pi\sigma_y^2)\,f(\theta)$. A numeric illustration of this estimate follows.
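
A small numeric illustration of the quoted plastic-zone estimate. The f(θ) used here is the standard plane-stress Von Mises form, which is an assumption on our part; the abstract does not spell out the paper's exact f(θ).

```python
# Irwin-type crack-tip plastic zone estimate: r_p = (K_I^2 / 2*pi*sigma_y^2) * f(theta).
# f(theta) below is the plane-stress Von Mises form (assumed, not from the paper).
import math

def plastic_zone_radius(K_I: float, sigma_y: float, theta: float) -> float:
    f = 0.5 * (1.0 + math.cos(theta)) + 0.75 * math.sin(theta) ** 2
    return (K_I ** 2 / (2.0 * math.pi * sigma_y ** 2)) * f

# Example: K_I = 30 MPa*sqrt(m), sigma_y = 300 MPa, along the crack plane (theta = 0)
print(plastic_zone_radius(30e6, 300e6, 0.0))  # ~0.0016 m, i.e. about 1.6 mm
```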

Research on the Design of TPO(Time, Place, Occasion)-Shift System for Mobile Multimedia Devices (휴대용 멀티미디어 디바이스를 위한 TPO(Time, Place, Occasion)-Shift 시스템 설계에 대한 연구)

  • Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.9-16
    • /
    • 2009
  • While broadband networks and multimedia technology have developed, the commercial market for digital content, including IPTV, has been spreading widely. Against this background, Time-Shift systems were developed to meet multimedia requirements; such a system is independent of time but not of place and occasion. To solve this problem, we propose a TPO(Time, Place, Occasion)-Shift system for mobile multimedia devices. The profile that applies to mobile multimedia devices differs considerably from that of a set-top box, and typical mobile multimedia devices do not have memories large enough for multimedia data, so it is important to continuously store and manage the multimedia data within the device's limited capacity and profile. We therefore compose "baskets" using a defined time unit and manage these baskets for effective buffer management. In addition, since each basket's file name includes the basket's time information, this time information can be used as the DTS (Decoding Time Stamp): when multimedia content is converted for portable multimedia devices, newly formatted content can be composed using the DTS information (see the sketch below). Using the basket-based buffer system, content can be composed in real time on mobile multimedia devices while saving memory. To verify real-time operation and performance, we implemented the proposed TPO-Shift system on an MS340 mobile device, with the set-top box side designed as a DirectShow player under Windows Vista. The results confirm the usefulness and real-time operation of the proposed system.
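
A minimal sketch of the basket idea under an assumed naming scheme (the paper's exact file format is not specified in the abstract): each basket covers a fixed time unit and its file name carries the start time, so the name alone supplies the DTS when the content is re-packaged for a mobile device.

```python
# Basket-based buffering sketch; BASKET_SECONDS and the name format are assumptions.
import os

BASKET_SECONDS = 10  # assumed basket time unit

def basket_name(stream_id: str, start_sec: int) -> str:
    # e.g. "news_000120.bsk" covers seconds 120..129 of stream "news"
    return f"{stream_id}_{start_sec:06d}.bsk"

def dts_from_name(path: str) -> int:
    """Recover the Decoding Time Stamp from the basket file name."""
    stem = os.path.splitext(os.path.basename(path))[0]
    return int(stem.rsplit("_", 1)[1])

def append_to_basket(stream_id: str, t_sec: int, chunk: bytes, root: str = "."):
    start = (t_sec // BASKET_SECONDS) * BASKET_SECONDS
    with open(os.path.join(root, basket_name(stream_id, start)), "ab") as f:
        f.write(chunk)  # data for second t_sec lands in its time-unit basket

print(dts_from_name("news_000120.bsk"))  # -> 120
```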

Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.1
    • /
    • pp.726-734
    • /
    • 2015
  • Relational databases, which store structured data, are currently the most widely used approach to data management. In a relational database, however, service becomes slower as the amount of data increases, because of constraints in the read and write operations used to save or query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configuration of hardware, CPU, memory, and network, to support smooth operation. In this paper, in order to improve web information services that slow down as data accumulates in relational databases, we implemented a model that extracts large amounts of data quickly and safely for users: data is sent to the Hadoop Distributed File System (HDFS), and the HDFS files are then processed, unified, and reconstructed (a minimal sketch of this pattern follows). We implemented our model in a web-based civil-affairs system that stores image files, i.e., irregularly structured data. The proposed system's data processing was found to be 0.4 sec faster than that of a relational database system. Thus, a Hadoop-based big data processing technique can support web information services that must process data volumes as large as those of conventional relational databases. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for web services that provide fast information processing for organizations that require efficient processing of big data driven by the growth of conventional relational databases.
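
A minimal sketch of the HDFS processing pattern, written as a Hadoop Streaming mapper/reducer pair. The record layout (tab-separated fields with an office id in the first column) and the job paths are assumptions, not the paper's actual code.

```python
#!/usr/bin/env python3
# Hadoop Streaming sketch: count records per office after loading files
# into HDFS. Invoke roughly as (paths and jar name are placeholders):
#   hadoop jar hadoop-streaming-*.jar -input /civil/raw -output /civil/by_office \
#       -mapper "mr.py" -reducer "mr.py reduce"
import sys

def mapper():
    for line in sys.stdin:                       # one record per line
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2:
            print(f"{fields[0]}\t{fields[1]}")   # key = office id (assumed)

def reducer():
    current, count = None, 0
    for line in sys.stdin:                       # Hadoop sorts by key between phases
        key, _ = line.rstrip("\n").split("\t", 1)
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = key, 0
        count += 1
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    reducer() if sys.argv[1:] == ["reduce"] else mapper()
```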

Design and Implementation of Flash Translation Layer with O(1) Crash Recovery Time (O(1) 크래시 복구 수행시간을 갖는 FTL의 설계와 구현)

  • Park, Joon Young;Park, Hyunchan;Yoo, Chuck
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.10
    • /
    • pp.639-644
    • /
    • 2015
  • The capacity of flash-based storage such as Solid State Drives (SSDs) and embedded Multi Media Cards (eMMC) keeps increasing to meet end-user needs. However, when a flash-based storage device crashes, for example during a power failure, the flash translation layer (FTL) must recover by examining the entire flash memory, so recovery time grows with storage capacity. We propose O1FTL, whose O(1) crash recovery time is independent of the flash capacity. O1FTL adopts the working area technique originally suggested for flash file systems (the sketch below illustrates why a bounded working area yields O(1) recovery). Evaluation on a real hardware platform shows that O1FTL achieves a crash recovery time independent of capacity, with low overhead in terms of I/O performance and P/E cycles.
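
A toy Python model, not O1FTL's actual layout, of why a bounded working area yields O(1) recovery: the full page map is checkpointed whenever the working area fills, so recovery replays at most a fixed number of log entries regardless of flash capacity. The area size and pages-per-block figure are assumed.

```python
# O(1)-recovery FTL sketch; `checkpoint` and `working_log` stand in for
# state that real hardware would persist in flash.
import json

WORKING_AREA_BLOCKS = 4    # fixed-size working area (assumed)
PAGES_PER_BLOCK = 64       # assumed flash geometry

class O1StyleFTL:
    def __init__(self):
        self.page_map = {}       # logical page number -> physical page number
        self.checkpoint = "{}"   # last persisted snapshot of page_map
        self.working_log = []    # writes confined to the working area

    def write(self, lpn: int, ppn: int):
        self.page_map[lpn] = ppn
        self.working_log.append((lpn, ppn))
        if len(self.working_log) >= WORKING_AREA_BLOCKS * PAGES_PER_BLOCK:
            self.checkpoint = json.dumps(self.page_map)  # persist the map
            self.working_log.clear()                     # recycle the area

    def recover(self):
        """Replay only the bounded working area over the last checkpoint."""
        self.page_map = {int(k): v
                         for k, v in json.loads(self.checkpoint).items()}
        for lpn, ppn in self.working_log:  # at most a fixed number of entries
            self.page_map[lpn] = ppn
```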

The Study of Response Model & Mechanism Against Windows Kernel Compromises (Windows 커널 공격기법의 대응 모델 및 메커니즘에 관한 연구)

  • Kim, Jae-Myong;Lee, Dong-Hwi;J. Kim, Kui-Nam
    • Convergence Security Journal
    • /
    • v.6 no.3
    • /
    • pp.1-12
    • /
    • 2006
  • Malicious codes have been widely documented and detected in information security breaches on the Microsoft Windows platform. Legacy information security systems are particularly vulnerable to Windows kernel-based malicious codes, which penetrate existing protection and remain undetected. To date there has not been enough quality study of, or information sharing about, the Windows kernel and its inner code mechanisms, and this is the core reason such codes can enter systems and remain undetected. This paper focuses on classifying and formalizing the types, targets, and mechanisms of various Windows kernel-based attacks, and presents suggestions for effective response methodologies in the categories of "kernel memory protection", "process & driver protection", and "file system & registry protection". An effective Windows kernel protection system is presented through the collection and analysis of Windows kernel internals, and through suggested implementation methodologies for new, unreleased Windows kernel protection techniques. The results show that the suggested system is highly effective and achieves more accurate intrusion detection ratios than current legacy security systems (i.e., virus vaccines, Windows IPS, etc.). The suggested system is therefore expected to provide a good solution for protecting IT infrastructure against complicated and intelligent Windows kernel attacks.


Development of a Remote Multi-Task Debugger for Qplus-T RTOS (Qplus-T RTOS를 위한 원격 멀티 태스크 디버거의 개발)

  • 이광용;김흥남
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.4
    • /
    • pp.393-409
    • /
    • 2003
  • In this paper, we present a multi-task debugging environment for Qplus-T embedded systems such as internet information appliances. We propose the structure and functions of a remote multi-task debugging environment supporting effective cross-development, and we enhance the communication architecture between the host and target systems to provide a more efficient cross-development environment. The remote development toolset, called Q+Esto, consists of several independent support tools: an interactive shell, a remote debugger, a resource monitor, a target manager, and a debug agent. Except for the debug agent, all of these support tools reside on the host system. Using the remote multi-task debugger on the host, the developer can spawn and debug tasks on the target run-time system; the debugger can also attach to already-running tasks spawned from the application or from the interactive shell. Application code can be viewed as C/C++ source or as assembly-level code, and the debugger incorporates a variety of display windows for source, registers, local/global variables, stack frames, memory, event traces, and so on. The target manager implements common functions shared by the Q+Esto tools, e.g., host-target communication, object file loading, and management of the target-resident host tools' memory pool and the target system's symbol table; these functions are exposed as Open C APIs, which greatly improve the extensibility of the Q+Esto toolset. The Q+Esto target manager is responsible for communication between the host and target systems. Its counterpart on the target system, called the debug agent, is a daemon task on the target's real-time operating system: it receives debugging requests from the host tools, including the debugger, via the target manager, interprets the requests, executes them, and sends the results back to the host (a minimal sketch of such an agent loop follows).
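
A minimal sketch of such a debug-agent loop; the JSON-over-TCP protocol and the command names are invented for illustration and are not the Q+Esto wire format. A host-side target manager would connect and send one request per line.

```python
# Debug-agent sketch: accept requests from the host, execute, reply.
import json
import socket

def handle(request: dict) -> dict:
    # Command set is hypothetical; a real agent would touch task and memory APIs.
    if request.get("cmd") == "read_mem":
        return {"ok": True, "data": "00" * request["len"]}  # stubbed memory dump
    if request.get("cmd") == "spawn_task":
        return {"ok": True, "task_id": 42}                  # stubbed task id
    return {"ok": False, "error": "unknown command"}

def debug_agent(port: int = 5555):
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as f:
            for line in f:                           # one JSON request per line
                f.write(json.dumps(handle(json.loads(line))) + "\n")
                f.flush()                            # push the reply to the host

if __name__ == "__main__":
    debug_agent()
```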

Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs are compressed or encrypted with various commercial packers to hinder reverse engineering, so malicious-code analysts must first decompress or decrypt them. The OEP (Original Entry Point) is the address of the first instruction executed after the packed executable has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses visited until the OEP appears, and look for the OEP among those addresses. However, instead of finding the one exact OEP, unpackers report a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates; in other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that yields smaller OEP candidate sets by adding two methods based on properties of the OEP, exploiting the fact that the function call sequence and parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we extended PinDemonium to detect the completion of unpacking by matching the patterns of system functions called in the packed and unpacked programs (sketched below). The second method is based on parameters, which include not only user-entered inputs but also system inputs; we extended PinDemonium to find the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers. On average, we reduce the OEP candidate set by more than 40% compared to PinDemonium, excluding 2 commercial packers that could not be executed due to anti-debugging techniques.
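
A minimal sketch of the first method under an assumed trace format: keep only OEP candidates that are followed by the same compiler-inserted system-function call sequence observed in a normal, unpacked build. The reference call list here is an assumption for illustration.

```python
# OEP candidate filtering by system-function call pattern (assumed CRT prologue).
REFERENCE_CALLS = ["GetSystemTimeAsFileTime", "GetCurrentProcessId",
                   "GetCurrentThreadId", "__security_init_cookie"]

def filter_candidates(candidates, trace):
    """candidates: OEP addresses; trace: (call-site address, callee name) pairs."""
    kept = []
    for oep in candidates:
        calls_after = [callee for addr, callee in trace if addr >= oep]
        if calls_after[:len(REFERENCE_CALLS)] == REFERENCE_CALLS:
            kept.append(oep)  # call pattern after this address matches the original
    return kept

trace = [(0x401000, "GetSystemTimeAsFileTime"), (0x401010, "GetCurrentProcessId"),
         (0x401020, "GetCurrentThreadId"), (0x401030, "__security_init_cookie")]
# 0x401015 starts mid-sequence, so only 0x401000 survives:
print([hex(a) for a in filter_candidates([0x401000, 0x401015], trace)])
```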

A Distributed VOD Server Based on Virtual Interface Architecture and Interval Cache (버추얼 인터페이스 아키텍처 및 인터벌 캐쉬에 기반한 분산 VOD 서버)

  • Oh, Soo-Cheol;Chung, Sang-Hwa
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.10
    • /
    • pp.734-745
    • /
    • 2006
  • This paper presents a PC cluster-based distributed VOD server that minimizes the load on the interconnection network by adopting the VIA communication protocol and the interval cache algorithm. Video data is distributed across the disks of the server, and each server node receives data through the interconnection network and sends it to clients; the large volume of video data transferred places a heavy load on the interconnection network. We developed a VIA-based distributed VOD file system to minimize the cost of accessing remote disks over the interconnection network. VIA is a user-level communication protocol that removes the overhead of TCP/IP, and we further improved interconnection performance by expanding VIA's maximum transfer size. In addition, the interval cache reduces traffic on the interconnection network by caching, in main memory, video data transferred from the disks of remote server nodes (see the sketch below). Experiments on a four-node PC cluster showed a maximum performance improvement of 21.3% compared with a distributed VOD server without VIA and the interval cache.
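
A simplified Python model of interval caching, with bookkeeping reduced to FIFO eviction: data read by a leading stream is retained in main memory, so a trailing stream of the same video is served from RAM rather than across the interconnection network to a remote disk.

```python
# Interval-cache sketch; block granularity and eviction policy are simplified.
from collections import deque

class IntervalCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.buf = {}            # (video, block) -> data held in main memory
        self.order = deque()     # oldest cached block evicted first

    def on_read(self, video: str, block: int, read_remote):
        key = (video, block)
        if key in self.buf:                   # trailing stream: RAM hit,
            return self.buf[key]              # no interconnection traffic
        data = read_remote(video, block)      # leading stream: remote disk
        self.buf[key] = data                  # keep for any stream behind us
        self.order.append(key)
        if len(self.order) > self.capacity:
            self.buf.pop(self.order.popleft(), None)
        return data

cache = IntervalCache(capacity_blocks=2)
fetch = lambda v, b: f"{v}:{b}"               # stand-in for a remote disk read
cache.on_read("movie", 0, fetch)              # leading stream fills the cache
print(cache.on_read("movie", 0, fetch))       # trailing stream served from RAM
```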

Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek;Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.3
    • /
    • pp.669-678
    • /
    • 2018
  • This study developed the information technology infrastructure for a driving-environment analysis platform that uses various big data sources, such as vehicle sensing data and public data. First, on the H/W side, a small platform server with a parallel structure for distributed big data processing was built. Next, on the S/W side, programs for big data collection/storage, processing/analysis, and information visualization were developed. The collection S/W was developed as a collection interface using Kafka, Flume, and Sqoop. The storage S/W divides data between the Hadoop Distributed File System and a Cassandra DB according to how the data is used. The processing S/W performs spatial-unit matching and time-interval interpolation/aggregation of the collected data by applying the grid index method (a minimal sketch follows). The analysis S/W was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization S/W was developed as a Web GIS engine program providing various driving-environment information and visualizations. From the performance evaluation, the optimal number of executors, memory capacity, and number of cores for the development server were derived, and the computation performance was superior to that of the cloud computing service used for comparison.
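
A minimal sketch of the grid index step, with assumed cell size, interval length, and record shape: each vehicle-sensing record is snapped to a spatial grid cell and a time interval, and values are then aggregated per cell.

```python
# Grid-index matching and time-interval aggregation sketch; the cell size,
# interval, and record fields are assumptions, not the platform's settings.
from collections import defaultdict

GRID_DEG = 0.001     # assumed cell size in degrees (roughly 100 m)
INTERVAL_SEC = 300   # assumed 5-minute aggregation interval

def grid_key(lat: float, lon: float, t_sec: int):
    return (int(lat / GRID_DEG), int(lon / GRID_DEG), t_sec // INTERVAL_SEC)

def aggregate(records):
    """records: (lat, lon, t_sec, speed_kph) tuples -> mean speed per grid cell."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, t, speed in records:
        cell = sums[grid_key(lat, lon, t)]
        cell[0] += speed
        cell[1] += 1
    return {k: total / n for k, (total, n) in sums.items()}

print(aggregate([(37.5665, 126.9780, 100, 42.0),
                 (37.5666, 126.9781, 200, 38.0)]))  # both land in one cell -> 40.0
```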