• Title/Summary/Keyword: Hive file

Study on Forensic Analysis with Access Control Modification for Registry (레지스트리 접근권한 변조에 관한 포렌식 분석 연구)

  • Kim, Hangi; Kim, Do-Won; Kim, Jongsung
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.5 / pp.1131-1139 / 2016
  • In the hive file format, the sk (security key) cell provides access control for registry keys. An attacker who can apply modified hive files to a system can read secret information in the registry or change its security settings. This paper presents various methods for changing the access control of registry keys by modifying or replacing cells in a hive file. We also discuss the threats posed by access control modification and how to analyze signs of attacks carried out with modified hive files.
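For illustration only, the sketch below walks a raw hive file and locates its sk (security key) cells. It follows the commonly documented regf/hbin cell layout rather than the paper's own tooling, and the offsets and field interpretations are assumptions.

```python
# Minimal sketch: enumerate sk (security) cells in a raw registry hive file.
# Based on the commonly documented "regf" layout; offsets here are assumptions
# for illustration, not the paper's actual tooling.
import struct
import sys

def iter_cells(hive_path):
    with open(hive_path, "rb") as f:
        data = f.read()
    if data[:4] != b"regf":
        raise ValueError("not a registry hive (missing 'regf' signature)")
    pos = 0x1000                          # hive bins start after the 4 KiB base block
    while pos + 0x20 <= len(data) and data[pos:pos + 4] == b"hbin":
        bin_size = struct.unpack_from("<I", data, pos + 8)[0]
        if bin_size == 0:
            break
        cell = pos + 0x20                 # first cell follows the 32-byte hbin header
        while cell < pos + bin_size:
            size = struct.unpack_from("<i", data, cell)[0]
            if size == 0:
                break
            allocated = size < 0          # a negative size marks an allocated cell
            signature = data[cell + 4:cell + 6]
            yield cell, allocated, signature
            cell += abs(size)
        pos += bin_size

def count_sk_cells(hive_path):
    return sum(1 for _, allocated, sig in iter_cells(hive_path)
               if allocated and sig == b"sk")

if __name__ == "__main__":
    print("sk cells:", count_sk_cells(sys.argv[1]))
```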

Efficient Multimedia Data File Management and Retrieval Strategy on Big Data Processing System

  • Lee, Jae-Kyung; Shin, Su-Mi; Kim, Kyung-Chang
    • Journal of the Korea Society of Computer and Information / v.20 no.8 / pp.77-83 / 2015
  • The storage and retrieval of multimedia data are becoming increasingly important in many application areas, including record management, video (CCTV) management, and the Internet of Things (IoT). In these applications, the number of multimedia files that must be stored and managed is tremendous and constantly growing. In this paper, we propose a technique for retrieving a very large number of multimedia files using the Hadoop framework. Our strategy is based on managing metadata that describes the characteristics of the files stored in the Hadoop Distributed File System (HDFS). The metadata schema is represented in HBase and looked up using SQL-on-Hadoop engines (Hive, Tajo). HBase, Hive, and Tajo are all part of the Hadoop ecosystem. A preliminary experiment on multimedia data files stored in HDFS shows the viability of the proposed strategy.
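As a rough illustration of the metadata strategy (not the authors' implementation), the sketch below registers a file's descriptive metadata in HBase and looks it up through a SQL-on-Hadoop engine. The table names, column family, Hive mapping, and the happybase/PyHive client libraries are all assumptions.

```python
# Illustrative sketch: file metadata kept in HBase, looked up via SQL-on-Hadoop.
# Table names, column families, and the Hive mapping are hypothetical; happybase
# and PyHive are assumed to be installed and pointed at a running cluster.
import happybase
from pyhive import hive

def register_file(hbase_host, hdfs_path, media_type, duration_sec):
    """Store one file's descriptive metadata as an HBase row keyed by its HDFS path."""
    conn = happybase.Connection(hbase_host)
    table = conn.table("multimedia_meta")            # hypothetical table name
    table.put(hdfs_path.encode(), {
        b"meta:media_type": media_type.encode(),
        b"meta:duration_sec": str(duration_sec).encode(),
    })
    conn.close()

def find_videos_longer_than(hive_host, seconds):
    """Look up matching files through a Hive table mapped onto the HBase metadata."""
    cursor = hive.connect(host=hive_host, port=10000).cursor()
    cursor.execute(
        "SELECT hdfs_path FROM multimedia_meta "     # hypothetical Hive mapping
        "WHERE media_type = 'video' AND duration_sec > %d" % seconds
    )
    return [row[0] for row in cursor.fetchall()]
```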

Research of Soft-Interface Creation and Provision Methodology According to Applications Based on Mobile Device Environment (모바일 디바이스 환경에서 어플리케이션에 따른 소프트 인터페이스 제작 및 제공 방안 연구)

  • Cho, Changhee; Park, Sanghyun; Lee, Sang-Joon; Kim, Jinsul
    • Journal of Digital Contents Society / v.14 no.4 / pp.513-519 / 2013
  • In this paper, we provide interfaces tailored to user application environments and offer a web-based tool with which users can create interfaces applicable to a wide range of application environments. HTML5 is used in the creation process, so users can build various interfaces by dragging the mouse and apply them not only to documents but also to multimedia and game applications, using the ASCII codes and key events provided by the Android OS. The interface database is stored in the Hadoop Distributed File System (HDFS) for management, and users can retrieve their own designed interfaces or select existing ones at any time through a simple login. To deliver interfaces quickly, Hive on Hadoop is used for search, and the data is provided as an XML file that smart mobile devices can process quickly.
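A hedged sketch of the serving path such a system might use: look up a stored soft-interface layout through Hive and return it as XML for the mobile client. The soft_interface table, its columns, and the PyHive client are hypothetical.

```python
# Rough sketch: fetch a user's soft-interface layout from Hive and serialize it
# to XML for a mobile client. Table and column names are invented placeholders.
import xml.etree.ElementTree as ET
from pyhive import hive

def fetch_interface_xml(hive_host, user_id, app_type):
    cursor = hive.connect(host=hive_host, port=10000).cursor()
    # Hypothetical table: one row per soft key (label, ASCII code, position).
    cursor.execute(
        "SELECT key_label, ascii_code, pos_x, pos_y FROM soft_interface "
        "WHERE user_id = '%s' AND app_type = '%s'" % (user_id, app_type)
    )
    root = ET.Element("interface", attrib={"user": user_id, "app": app_type})
    for label, ascii_code, x, y in cursor.fetchall():
        ET.SubElement(root, "key", attrib={
            "label": label, "ascii": str(ascii_code), "x": str(x), "y": str(y),
        })
    return ET.tostring(root, encoding="unicode")
```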

A Design and Implementation of Application virtualization method using virtual supporting system and Copy-on-Write scheme (가상화 지원 시스템과 Copy-on-Write 방법을 이용한 응용프로그램 가상화 방법의 설계 및 구현)

  • Choi, Won Hyuk; Choi, Ji Hoon; Kim, Won-Young; Choi, Wan
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.807-811 / 2007
  • In this paper, we introduce an application virtualization method that works without changing or modifying any resources or the execution environment on the host system, using a non-installable portable software format that can be run with one click on any host, with no installation process. To design and implement this method, we construct a virtual supporting system that includes a virtual file system and a virtual registry hive at the kernel level of the Windows operating system. We also describe how, to keep portable software consistent when it is executed on any host, information about added and modified files and registry data is handled in the virtual file system and virtual registry hive through a Copy-on-Write scheme.
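The Copy-on-Write idea can be illustrated with a small user-space sketch (the paper's system operates at the Windows kernel level, so this is a conceptual analogue only): reads fall through to the host's unchanged data, while writes and deletions land in a private overlay.

```python
# Conceptual sketch of Copy-on-Write applied to registry-style data: reads fall
# through to the host's (read-only) view, writes and deletes go to a private
# overlay owned by the portable application.
class CowRegistryView:
    _DELETED = object()           # tombstone for keys removed in the overlay

    def __init__(self, host_view):
        self._host = host_view    # read-only dict-like view of the real hive
        self._overlay = {}        # per-application copy-on-write layer

    def get(self, key):
        if key in self._overlay:
            value = self._overlay[key]
            if value is self._DELETED:
                raise KeyError(key)
            return value
        return self._host[key]    # unmodified keys are served from the host

    def set(self, key, value):
        self._overlay[key] = value   # the host data is never touched

    def delete(self, key):
        self._overlay[key] = self._DELETED

# Example: the portable app edits a value without altering the host registry.
host = {r"HKCU\Software\App\Theme": "light"}
view = CowRegistryView(host)
view.set(r"HKCU\Software\App\Theme", "dark")
assert view.get(r"HKCU\Software\App\Theme") == "dark"
assert host[r"HKCU\Software\App\Theme"] == "light"
```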

Method of estimating the deleted time of applications using Amcache.hve (앰캐시(Amcache.hve) 파일을 활용한 응용 프로그램 삭제시간 추정방법)

  • Kim, Moon-Ho; Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.3 / pp.573-583 / 2015
  • The Amcache.hve file is a registry hive file related to the Program Compatibility Assistant that stores execution information about applications. From the Amcache.hve file, we can determine an application's execution path and first execution time as well as its deleted time. Because it records both the first install time and the deleted time, the Amcache.hve file can be used to draw up an overall timeline of applications when combined with Prefetch files and Iconcache.db files. The Amcache.hve file is also an important artifact for recording traces of anti-forensic programs, portable programs, and external storage devices. This paper describes the features of the Amcache.hve file and methods for using it in digital forensics, such as estimating the deleted time of applications.
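Below is a minimal sketch, assuming the third-party python-registry package, of how Amcache.hve file entries and their key timestamps could be enumerated. The Root\File layout and the forensic meaning of the last-write times vary across Windows versions and follow the abstract's description rather than the paper itself.

```python
# Minimal sketch: list Amcache.hve file entries with python-registry (assumed
# available). The "Root\File" layout is the older Amcache format and may differ
# between Windows versions.
import sys
from Registry import Registry

def list_amcache_entries(amcache_path):
    reg = Registry.Registry(amcache_path)
    root = reg.open("Root\\File")              # per-volume subkeys of file entries
    entries = []
    for volume_key in root.subkeys():
        for file_key in volume_key.subkeys():
            entries.append({
                "key_name": file_key.name(),
                # Key last-write time: candidate evidence for when the entry was
                # created or updated (e.g., around install/delete events).
                "last_write": file_key.timestamp(),
                "values": {v.name(): v.value() for v in file_key.values()},
            })
    return entries

if __name__ == "__main__":
    for entry in list_amcache_entries(sys.argv[1]):
        print(entry["last_write"], entry["key_name"])
```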

Big data-based piping material analysis framework in offshore structure for contract design

  • Oh, Min-Jae; Roh, Myung-Il; Park, Sung-Woo; Chun, Do-Hyun; Myung, Sehyun
    • Ocean Systems Engineering / v.9 no.1 / pp.79-95 / 2019
  • The material analysis of an offshore structure is generally conducted in the contract design phase for the price quotation of a new offshore project. This analysis is performed manually by an engineer, which is time-consuming and can lead to inaccurate results, because the data from previous projects are very large and there are many materials to consider. In this study, the piping materials in an offshore structure are analyzed for contract design using a big data framework. The big data technologies used include HDFS (Hadoop Distributed File System) for data storage, Hive and HBase as the databases for handling the stored data, Spark and Kylin for data processing, and Zeppelin for the user interface and visualization. The results show that the proposed big data framework can reduce the effort put into contract design when estimating the piping material cost.
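As an illustration of the kind of processing step such a framework could run (not the authors' code), the sketch below summarizes piping material quantities per project from a Hive table using Spark; the database, table, and column names are hypothetical.

```python
# Hedged sketch: aggregate piping material quantities per project from a Hive
# table with Spark. Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("piping-material-analysis")
         .enableHiveSupport()                 # read the Hive tables backing the framework
         .getOrCreate())

piping = spark.table("offshore.piping_materials")   # hypothetical Hive table

summary = (piping
           .groupBy("project_id", "material_spec")
           .agg(F.sum("quantity").alias("total_qty"),
                F.avg("unit_price").alias("avg_unit_price"))
           .orderBy("project_id", "material_spec"))

summary.show(20, truncate=False)   # results could feed Kylin cubes / Zeppelin notebooks downstream
```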

Design of Distributed Hadoop Full Stack Platform for Big Data Collection and Processing (빅데이터 수집 처리를 위한 분산 하둡 풀스택 플랫폼의 설계)

  • Lee, Myeong-Ho
    • Journal of the Korea Convergence Society / v.12 no.7 / pp.45-51 / 2021
  • With the rapid shift to non-face-to-face environments and mobile-first strategies, the explosive growth of structured and unstructured data each year demands new decision making and services based on big data in every field. However, there have been few reference cases in which the Hadoop ecosystem is used to collect this rapidly growing big data, load it onto a standard platform applicable in practical environments, and then store the well-organized big data in a relational database. In this study, therefore, unstructured data matching search keywords was collected from social network services on Hadoop 2.0 across three virtual machine servers in a Spring Framework environment; the collected unstructured data was loaded into the Hadoop Distributed File System and HBase; and a system was designed and implemented that applies a morpheme analyzer to the loaded data and stores the standardized big data in a relational database. Future research should continue on clustering, classification, and analysis with machine learning using Hive or Mahout for deeper data analysis.
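A simplified sketch of the final stage described above: run a morpheme analyzer over collected posts and store the standardized rows in a relational database. The HDFS/HBase loading steps are omitted, sqlite3 stands in for the relational store, and analyze_morphemes is a placeholder for a real Korean morpheme analyzer (e.g., a KoNLPy tagger).

```python
# Simplified sketch: tokenize collected unstructured posts and persist the
# standardized rows in a relational table. sqlite3 is a stand-in store and the
# analyzer is a placeholder for a real Korean morpheme analyzer.
import sqlite3

def analyze_morphemes(text):
    # Placeholder: a real pipeline would call a morpheme analyzer here.
    return text.split()

def store_standardized(posts, db_path="collected.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS post_morphemes (
                        post_id TEXT, keyword TEXT, morpheme TEXT)""")
    for post in posts:                       # each post: {"id", "keyword", "text"}
        for morpheme in analyze_morphemes(post["text"]):
            conn.execute("INSERT INTO post_morphemes VALUES (?, ?, ?)",
                         (post["id"], post["keyword"], morpheme))
    conn.commit()
    conn.close()

store_standardized([{"id": "sns-001", "keyword": "bigdata",
                     "text": "non face to face services keep growing"}])
```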

An Interconnection Method for Streaming Framework and Multimedia Database (스트리밍 프레임워크와 멀티미디어 데이타베이스와의 연동기법)

  • Lee, Jae-Wook; Lee, Sung-Young; Lee, Jong-Won
    • Journal of KIISE: Software and Applications / v.29 no.7 / pp.436-449 / 2002
  • This paper describes our experience developing the Database Connector as an interconnection method between a multimedia database and the streaming framework. If an interconnection method is provided between a streaming system and a multimedia database, diverse and mature multimedia database services such as retrieval and join operations can be supported during streaming. Currently available interconnection schemes, however, mainly use file systems or relational databases in which the metadata, which deals with information about multimedia contents, is kept separate from the streaming data, which deals with the multimedia data itself. Consequently, existing interconnection mechanisms cannot take advantage of many virtues of multimedia database services during streaming. To resolve these drawbacks, we propose a novel scheme for interconnecting the streaming framework and the multimedia database, called the Inter-Process Communication (IPC) based Database Connector, under the assumption that the two systems are located on the same host. We define four transaction primitives (Read, Write, Find, and Play) and define the transaction interface as a plug-in, so it can be extended to other multimedia databases in the coming years. Our simulation study shows that the performance of the proposed IPC-based interconnection scheme is not far behind that of file systems.
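A toy sketch of the connector idea under the same-host assumption: the streaming-framework side exposes the four transaction primitives (Read, Write, Find, Play) and exchanges them with a stub database over a local IPC channel. Python's multiprocessing.connection and the message format stand in for the actual mechanism.

```python
# Toy sketch: same-host IPC "Database Connector" exposing Read/Write/Find/Play.
# multiprocessing.connection and the message layout are stand-ins, not the
# paper's implementation.
import threading
from multiprocessing.connection import Listener, Client

ADDRESS = ("localhost", 6100)                 # same-host channel, per the paper's assumption
listener = Listener(ADDRESS, authkey=b"demo")

def database_stub():
    """Pretend multimedia database answering one connector session."""
    with listener.accept() as conn:
        while True:
            request = conn.recv()             # {"op": ..., "args": {...}}
            if request["op"] == "close":
                break
            conn.send({"op": request["op"], "status": "ok", "args": request["args"]})

class DatabaseConnector:
    """Streaming-framework side, exposing the four transaction primitives."""
    def __init__(self):
        self._conn = Client(ADDRESS, authkey=b"demo")

    def _call(self, op, **args):
        self._conn.send({"op": op, "args": args})
        return self._conn.recv()

    def read(self, content_id):        return self._call("Read", content_id=content_id)
    def write(self, content_id, data): return self._call("Write", content_id=content_id, data=data)
    def find(self, query):             return self._call("Find", query=query)
    def play(self, content_id):        return self._call("Play", content_id=content_id)

    def close(self):
        self._conn.send({"op": "close", "args": {}})
        self._conn.close()

threading.Thread(target=database_stub, daemon=True).start()
connector = DatabaseConnector()
print(connector.find(query="title:lecture"))
print(connector.play(content_id="mm-42"))
connector.close()
listener.close()
```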