• Title/Summary/Keyword: Process of Data Deletion


A Study on the Guideline for the Data Deletion (데이터 폐기 지침 마련을 위한 기초 연구)

  • Lim, Tae-Hoon; Seo, Jik-Soo; Kim, Sun-Young
    • Journal of Information Management, v.41 no.4, pp.165-186, 2010
  • This study aims to suggest the basis and criteria for a data deletion guideline that keeps information systems effective and reduces the cost of system management. To frame the guideline, we reviewed the laws and policies of the USA, the UK, and Australia, as well as the domestic laws and regulations related to the deletion of records. Based on this review, we prepared a draft guideline and gathered opinions on it. Through this research and survey, we produced a guideline covering the criteria, process, and methods of data deletion. When the guideline was applied to a sample organization, no problems were found in deleting unused data.

A Compact Divide-and-conquer Algorithm for Delaunay Triangulation with an Array-based Data Structure (배열기반 데이터 구조를 이용한 간략한 divide-and-conquer 삼각화 알고리즘)

  • Yang, Sang-Wook; Choi, Young
    • Korean Journal of Computational Design and Engineering, v.14 no.4, pp.217-224, 2009
  • Most divide-and-conquer implementations of Delaunay triangulation use the quad-edge or winged-edge data structure, since triangles are frequently deleted and created during the merge process. However, the proposed divide-and-conquer algorithm uses an array-based data structure that is much simpler than the quad-edge data structure and requires less memory allocation. The proposed algorithm has two important features. First, the space-partitioning information is represented as a permutation vector sequence in the vertex array, so no additional data are required for the space partitioning. The permutation vector represents adaptively divided regions in two dimensions; two-dimensional partitioning of the space is more efficient in the merge process than one-dimensional partitioning. Second, no edges are deleted during the merge process, so no bookkeeping of complex intermediate states for topology changes is necessary. The algorithm is described compactly with the proposed data structures and operators so that it can be implemented easily and efficiently.
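
As an illustration of the array-based idea described above, the sketch below partitions a vertex array in place by alternating median splits in x and y, so that the resulting permutation of indices is the only record of the 2D regions. It is a minimal interpretation of the abstract written with NumPy; the function name, leaf size, and the omission of the triangulation and merge steps are choices made here, not details from the paper.

    # Minimal sketch: the permutation of the index array encodes the adaptive
    # 2D partition; no separate tree structure is stored.
    import numpy as np

    def partition(points, order, lo, hi, axis=0, leaf_size=3):
        """Reorder order[lo:hi] so each half lies on one side of the median
        along `axis`, then recurse with the axes alternated."""
        if hi - lo <= leaf_size:
            return
        mid = (lo + hi) // 2
        idx = order[lo:hi]                       # view into the permutation vector
        keys = points[idx, axis]
        idx[:] = idx[np.argpartition(keys, mid - lo)]
        partition(points, order, lo, mid, 1 - axis, leaf_size)
        partition(points, order, mid, hi, 1 - axis, leaf_size)

    points = np.random.rand(16, 2)               # vertex array
    order = np.arange(len(points))               # permutation vector over the vertices
    partition(points, order, 0, len(points))
    # Each index range of `order` now corresponds to a 2D region; the paper's
    # algorithm would triangulate the leaf ranges and merge adjacent regions.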

Development of an Editor for Reference Data Library Based on ISO 15926 (ISO 15926 기반의 참조 데이터 라이브러리 편집기의 개발)

  • Jeon, Youngjun; Byon, Su-Jin; Mun, Duhwan
    • Korean Journal of Computational Design and Engineering, v.19 no.4, pp.390-401, 2014
  • ISO 15926 is an international standard for the integration of lifecycle data for process plants, including oil and gas facilities. From the viewpoint of information modeling, ISO 15926 Part 2 provides the general data model, which is designed to be used in conjunction with reference data. Reference data are standard instances that represent classes, objects, properties, and templates common to a number of users, process plants, or both. ISO 15926 Parts 4 and 7 provide the initial set of classes, objects, and properties and the initial set of templates, respectively. User-defined reference data specific to companies or organizations are defined by inheriting from the initial reference data and the initial set of templates. To support the extension of reference data and templates, an editor is needed that provides creation, deletion, and modification functions for user-defined reference data. In this study, an editor for reference data based on ISO 15926 was developed. Sample reference data were encoded in OWL (web ontology language) according to the specification of ISO 15926 Part 8. iRINGTools and dot15926Editor were benchmarked for the design of the GUI (graphical user interface). Reference data search, creation, modification, and deletion functions were implemented with the XML (extensible markup language) DOM (document object model) and SPARQL (SPARQL protocol and RDF query language).
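
The search function over OWL-encoded reference data can be pictured with a short SPARQL query; the sketch below uses Python's rdflib rather than the editor's own implementation, and the file name, label filter, and query are hypothetical placeholders.

    # Hypothetical example only: searching reference data by label with SPARQL.
    from rdflib import Graph

    g = Graph()
    g.parse("sample_reference_data.owl", format="xml")   # placeholder OWL file

    query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?item ?label
    WHERE {
        ?item rdfs:label ?label .
        FILTER(CONTAINS(LCASE(STR(?label)), "pump"))
    }
    """
    for item, label in g.query(query):
        print(item, label)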

The effect of word frequency on the reduction of English CVCC syllables in spontaneous speech

  • Kim, Jungsun
    • Phonetics and Speech Sciences, v.7 no.3, pp.45-53, 2015
  • The current study investigated CVCC syllables in spontaneous American English speech to find out whether such syllables are produced as phonological units with a string of segments, showing a hierarchical structure. Transcribed data from the Buckeye Speech Corpus were used for the analysis. The results showed that the constituents within a CVCC syllable as a phonological unit may have phonetic variations (namely, the final coda may undergo deletion). First, voiceless alveolar stops were the most frequently deleted when they occurred as the second coda consonant of a CVCC syllable; this deletion may be an intermediate process on the way from the abstract form CVCC (with the rime VCC) to the actual pronunciation CVC (with the rime VC), a production strategy employed by some individual speakers. Second, in the internal structure of the rime, the proportion of deletion of the final coda consonant depended on the frequency of the word rather than on the position of postvocalic consonants on the sonority hierarchy. Finally, the segment following the consonant cluster proved to have an effect on the reduction of that cluster; more precisely, a contrast was observed between obstruents and non-obstruents, reflecting the effect of sonority: when the segment following the consonant cluster was an obstruent, the proportion of deletion of the final coda consonant increased. Among these results, word frequency played a critical role in promoting the deletion of the second coda consonant of clusters in CVCC syllables in spontaneous speech. The current study implies that the structure of syllables as phonological units can vary depending on individual speakers' lexical representations.
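
The central measurement, the proportion of final-coda deletion within a word-frequency band, can be illustrated with a small table of transcribed tokens; the column names and toy values below are placeholders, not data from the Buckeye Speech Corpus.

    # Illustrative computation of coda-deletion proportion by frequency band.
    import pandas as pd

    tokens = pd.DataFrame({
        "word":         ["last", "last", "best", "best", "mist", "tasked"],
        "freq_band":    ["high", "high", "high", "high", "low",  "low"],
        "coda_deleted": [1,      1,      0,      1,      0,      0],
    })

    deletion_rate = tokens.groupby("freq_band")["coda_deleted"].mean()
    print(deletion_rate)   # share of tokens whose final coda consonant was deleted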

CONTINUOUS QUERY PROCESSING IN A DATA STREAM ENVIRONMENT

  • Lee, Dong-Gyu; Lee, Bong-Jae; Ryu, Keun-Ho
    • Proceedings of the KSRS Conference, 2007.10a, pp.3-5, 2007
  • It is important to process many continuous queries efficiently in a data stream environment. A query index technique whose performance is linear irrespective of the number and width of intervals is applied to process many continuous queries. Previous studies cannot support dynamic insertion and deletion because intervals must be arranged in advance to construct the index, and their insertion and search performance slows with the number and width of the inserted intervals. In a data stream environment, many intervals must be inserted and searched continuously. Therefore, we propose Hashed Multiple Lists to process continuous queries linearly. The proposed technique shows fast, linear search performance. It can be utilized in systems built on sensor networks and as a preprocessing technique for spatiotemporal data mining.
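
The abstract does not spell out the internal layout of Hashed Multiple Lists, so the sketch below is only a rough reading of the idea: query intervals are hashed into buckets by value range so that insertion, deletion, and stabbing search each touch a short list.

    # Rough interpretation only; the paper's actual Hashed Multiple Lists
    # structure may differ in how buckets and lists are organized.
    from collections import defaultdict

    BUCKET_WIDTH = 10   # illustrative width of one hash bucket

    class HashedIntervalIndex:
        def __init__(self):
            self.buckets = defaultdict(list)   # bucket id -> [(low, high, query_id)]

        def insert(self, low, high, query_id):
            """Register a continuous query's interval in every bucket it overlaps."""
            for b in range(int(low // BUCKET_WIDTH), int(high // BUCKET_WIDTH) + 1):
                self.buckets[b].append((low, high, query_id))

        def delete(self, query_id):
            """Dynamically remove a query's interval from all buckets."""
            for b in self.buckets:
                self.buckets[b] = [iv for iv in self.buckets[b] if iv[2] != query_id]

        def stab(self, value):
            """Return the queries whose interval contains an incoming stream value."""
            b = int(value // BUCKET_WIDTH)
            return [qid for low, high, qid in self.buckets[b] if low <= value <= high]

    index = HashedIntervalIndex()
    index.insert(5, 25, "q1")
    index.insert(18, 40, "q2")
    print(index.stab(20))   # -> ['q1', 'q2']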


A Study on the Equipment Maintenance Management Support System for Production Efficiency (생산효율화를 위한 설비보전관리 지원시스템에 관한 연구 -설비보전정보시스템을 중심으로-)

  • 송원섭
    • Journal of Korean Society of Industrial and Systems Engineering, v.21 no.48, pp.279-289, 1998
  • This study deals with schemes to design, plan, and operate a maintenance management support system and with an engineering approach to building maintenance management for production efficiency. A Maintenance Management Information System (MMIS) must focus on machinery history data and planned maintenance actions. An efficient maintenance management support system is also achieved through a database built on the processing of machinery failure histories. In the designed maintenance management information system, the maintenance modules consist of six factors: machinery history data, lubrication control, check sheets, repair work, availability reports, and performance reports (control board and detailed reports), so that operators can rapidly utilize the data in the workplace. In the implementation of the designed model, the program was coded in Visual Basic 3.0. Data insertion, deletion, and updating are performed through menu screens and implemented by reading data from the database. The implemented model is based on a LAN environment, and the related data are stored in a Microsoft DBMS.
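
The paper's system was written in Visual Basic 3.0 against a Microsoft DBMS over a LAN; the sketch below only mirrors the described menu-driven insert, delete, and update pattern using Python's sqlite3, with a hypothetical machinery-history table and columns.

    # Hypothetical table mirroring the insert/delete/update pattern of the MMIS.
    import sqlite3

    conn = sqlite3.connect("mmis.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS machine_history (
                        id INTEGER PRIMARY KEY,
                        machine_no TEXT,
                        failure_date TEXT,
                        failure_desc TEXT,
                        repair_hours REAL)""")

    # Insertion: record a machinery failure event.
    conn.execute("INSERT INTO machine_history (machine_no, failure_date, failure_desc, repair_hours) "
                 "VALUES (?, ?, ?, ?)", ("M-101", "1998-03-02", "bearing wear", 3.5))

    # Updating: correct the repair time after the work order is closed.
    conn.execute("UPDATE machine_history SET repair_hours = ? WHERE machine_no = ? AND failure_date = ?",
                 (4.0, "M-101", "1998-03-02"))

    # Deletion: remove an erroneous record.
    conn.execute("DELETE FROM machine_history WHERE id = ?", (1,))
    conn.commit()
    conn.close()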


The Method of Recovery for Deleted Record of Realm Database (Realm 데이터베이스의 삭제된 레코드 복구 기법)

  • Kim, Junki; Han, Jaehyeok; Choi, Jong-Hyun; Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology, v.28 no.3, pp.625-633, 2018
  • Realm is an open-source database developed to replace SQLite, which is commonly used on mobile devices. The data stored in the database must be examined during the digital forensic analysis of mobile devices because it can help reveal the behavior of the user and how the mobile device was operated. In addition, since a user can intentionally apply anti-forensic techniques such as deleting data stored in the database, research on how to recover deleted records is needed. In this paper, we propose a method to recover records that have not been overwritten after deletion, based on an analysis of the structure, records, and deletion process of the Realm database file.
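
The recovery method itself depends on the internal structure of Realm files, which the abstract does not reproduce; the sketch below shows only the general shape of such a step, scanning a database file for record-like byte patterns that were deallocated but not yet overwritten, with an entirely hypothetical record signature and size.

    # Generic carving sketch only: the byte signature and record size below are
    # hypothetical placeholders, not the actual Realm file format analyzed in the paper.
    RECORD_MAGIC = b"\x52\x45\x43"   # hypothetical marker at the start of a record

    def carve_candidate_records(path, record_size=64):
        """Scan the raw file for record-shaped regions that may survive deletion
        (i.e., records no longer referenced but not yet overwritten)."""
        with open(path, "rb") as f:
            data = f.read()
        candidates = []
        offset = data.find(RECORD_MAGIC)
        while offset != -1:
            candidates.append(data[offset:offset + record_size])
            offset = data.find(RECORD_MAGIC, offset + 1)
        return candidates

    # for raw in carve_candidate_records("default.realm"):
    #     ...decode fields according to the recovered record structure...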

Detection of Frame Deletion Using Convolutional Neural Network (CNN 기반 동영상의 프레임 삭제 검출 기법)

  • Hong, Jin Hyung; Yang, Yoonmo; Oh, Byung Tae
    • Journal of Broadcast Engineering, v.23 no.6, pp.886-895, 2018
  • In this paper, we introduce a technique to detect video forgery by using the regularity that arises in the video compression process. The proposed method uses the hierarchical regularity lost through double compression and frame deletion. To extract such irregularities, the depth information of the CU and TU, which are the basic units of HEVC, is used. To improve performance, we build depth maps of the CU and TU from local information and then create the input data by grouping them in GoP units. Whether the video is double-compressed and forged is decided with a general three-dimensional convolutional neural network. Experimental results show that the proposed method detects forged videos more effectively than existing machine learning algorithms.
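
A three-dimensional convolutional network over GoP-grouped CU/TU depth maps can be sketched in PyTorch as below; the channel counts, GoP length, spatial size, and layer arrangement are placeholders and not the architecture reported in the paper.

    # Illustrative 3D-CNN classifier over GoP-grouped CU/TU depth-map volumes.
    import torch
    import torch.nn as nn

    class FrameDeletionDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(2, 16, kernel_size=3, padding=1),  # 2 channels: CU depth, TU depth
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, 2)   # forged (double-compressed) vs. original

        def forward(self, x):                    # x: (batch, 2, GoP, H, W)
            x = self.features(x).flatten(1)
            return self.classifier(x)

    model = FrameDeletionDetector()
    dummy = torch.randn(4, 2, 8, 40, 40)         # batch of 4 GoP-grouped depth-map volumes
    print(model(dummy).shape)                    # torch.Size([4, 2])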

Indexing Methods of Splitting XML Documents (XML 문서의 분할 인덱스 기법)

  • Kim, Jong-Myung; Jin, Min
    • Journal of Korea Multimedia Society, v.6 no.3, pp.397-408, 2003
  • Existing indexing mechanisms for XML data that use a numbering scheme have the drawback of rebuilding the entire index structure when insertions, deletions, or updates occur on the data. We propose a new indexing mechanism based on split blocks to cope with this problem. The XML data are split into blocks, such that there is at most one relationship between two blocks, and the numbering scheme is applied to each block. This mechanism reduces the overhead of rebuilding index structures when insertions, deletions, or updates occur on the data. We also propose two algorithms, the Parent-Child Block Merge Algorithm and the Ancestor-Descendent Algorithm, which retrieve the relationship between two entities in the XML hierarchy using this indexing mechanism. In addition, we propose a mechanism in which the identifier of a block carries information about its parent block to expedite retrieval of ancestor-descendent relationships, together with the corresponding Parent-Child Block Merge Algorithm and Ancestor-Descendent Algorithm for this indexing mechanism.
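
The splitting idea can be pictured with a short traversal that assigns each element a (block id, local number) pair and records each block's parent block, so that a change only renumbers within one block; the block-size cap and splitting policy below are illustrative and not the exact scheme of the paper.

    # Simplified sketch of block splitting and per-block numbering for an XML tree.
    import xml.etree.ElementTree as ET

    BLOCK_SIZE = 4   # illustrative cap on elements per block

    def split_and_number(root):
        """Assign each element a (block id, local number); when a block fills,
        open a new block that remembers its parent block, so an insertion or
        deletion only renumbers within one block instead of the whole tree."""
        labels = []                     # (tag, block_id, local_number)
        parent_block = {0: None}
        block_sizes = {0: 0}
        next_block = 1

        def visit(elem, block):
            nonlocal next_block
            if block_sizes[block] >= BLOCK_SIZE:
                parent_block[next_block] = block
                block_sizes[next_block] = 0
                block = next_block
                next_block += 1
            block_sizes[block] += 1
            labels.append((elem.tag, block, block_sizes[block]))
            for child in elem:
                visit(child, block)

        visit(root, 0)
        return labels, parent_block

    doc = ET.fromstring("<a><b><c/><d/></b><e><f/><g/><h/></e></a>")
    labels, parent_block = split_and_number(doc)
    print(labels)        # [('a', 0, 1), ('b', 0, 2), ('c', 0, 3), ('d', 0, 4), ('e', 1, 1), ...]
    print(parent_block)  # {0: None, 1: 0}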


Design of Low Complexity Human Anxiety Classification Model based on Machine Learning (기계학습 기반 저 복잡도 긴장 상태 분류 모델)

  • Hong, Eunjae; Park, Hyunggon
    • The Transactions of The Korean Institute of Electrical Engineers, v.66 no.9, pp.1402-1408, 2017
  • Recently, services for personal biometric data analysis based on real-time monitoring systems have been increasing, and many of them focus on emotion recognition. In this paper, we propose a classification model that classifies the anxiety emotion using biometric data actually collected from people. We deploy a support vector machine to build the classification model. To improve the classification accuracy, we propose two data pre-processing procedures, normalization and data deletion. The proposed algorithms are implemented on a Real-time Traffic Flow Measurement structure, which consists of a data collection module, a data preprocessing module, and a module for creating the classification model. Our experimental results show that the proposed classification model infers the anxiety emotions of people with an accuracy of 65.18%. Moreover, the proposed model with the proposed pre-processing techniques shows an improved accuracy of 78.77%. Therefore, we conclude that the proposed classification model based on the pre-processing process can improve the classification accuracy with lower computational complexity.
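
The two pre-processing steps plus the SVM classifier can be sketched with scikit-learn as below; the synthetic features, the z-score deletion rule, and all parameters are placeholders rather than the procedures or data used in the paper.

    # Sketch only: normalization, data deletion, then an SVM classifier.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))                 # stand-in for collected biometric features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

    # Pre-processing 1: normalization.
    X = StandardScaler().fit_transform(X)

    # Pre-processing 2: data deletion -- here, drop samples with extreme values.
    keep = (np.abs(X) < 3).all(axis=1)
    X, y = X[keep], y[keep]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))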