• Title/Summary/Keyword: Memory usage


Algorithm to Search for the Original Song from a Cover Song Using Inflection Points of the Melody Line (멜로디 라인의 변곡점을 활용한 커버곡의 원곡 검색 알고리즘)

  • Lee, Bo Hyun; Kim, Myung
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.195-200 / 2021
  • Due to the growth of video sharing platforms, the amount of uploaded video is exploding. These videos often include various types of music, among them cover songs. To protect music copyright, an algorithm that finds the original song of a cover song is essential. However, finding the original is not easy, because a cover song modifies the composition, tempo, and overall structure of the original. So far, no effective algorithm for retrieving the original song of a cover song has been known. In this paper, we propose an algorithm that searches for the original song of a cover song using the inflection points of the melody line. Inflection points mark the characteristic points of change in a melody sequence. The proposed algorithm compares the original song and the cover song using the sequence of inflection points extracted from the original song's representative phrase. Because the characteristics of the representative phrase are used, the algorithm's search performance remains excellent even when the cover song modifies the overall composition of the original. Moreover, since the algorithm uses only the features of the inflection point sequence, its memory usage is very low. The efficiency of the algorithm was verified through performance evaluation.
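The abstract does not give the exact feature definition, but the core idea, marking the points where a melody's direction of motion flips, can be sketched as follows. This is a minimal illustration assuming pitches are MIDI-style numbers; `inflection_points` is a hypothetical name, not the authors' code:

```python
def inflection_points(melody):
    """Return (index, pitch) pairs where the melody's direction of
    motion changes (local peaks and valleys).

    `melody` is a list of pitch values, e.g. MIDI note numbers.
    Flat segments are skipped so that plateaus do not register
    spurious direction changes.
    """
    points = []
    prev_dir = 0  # -1 falling, +1 rising, 0 flat/unknown
    for i in range(1, len(melody)):
        diff = melody[i] - melody[i - 1]
        direction = (diff > 0) - (diff < 0)
        if direction != 0:
            if prev_dir != 0 and direction != prev_dir:
                # direction flipped: the previous note is an inflection point
                points.append((i - 1, melody[i - 1]))
            prev_dir = direction
    return points
```

Because only this sparse sequence of turning points needs to be stored per song, rather than the full melody, memory usage stays low, which matches the property the abstract emphasizes.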

Development of 3D Reverse Time Migration Software for Ultra-high-resolution Seismic Survey (초고해상 탄성파 탐사를 위한 3차원 역시간 구조보정 프로그램 개발)

  • Kim, Dae-sik; Shin, Jungkyun; Ha, Jiho; Kang, Nyeon Keon; Oh, Ju-Won
    • Geophysics and Geophysical Exploration / v.25 no.3 / pp.109-119 / 2022
  • Reverse time migration (RTM) based on numerical modeling cannot easily secure computational efficiency for data acquired through a three-dimensional (3D) ultra-high-resolution (UHR) seismic survey, because such data occupy a high-frequency band of several hundred Hz or more. This study therefore develops an RTM program that derives high-quality 3D geological structures from UHR seismic data. On top of a traditional 3D RTM scheme, two techniques were applied to significantly reduce memory usage and computation time: an excitation amplitude technique that stores only the maximum amplitude of the source wavefield, and a domain-limiting technique that restricts the modeling area to the region where the source and receivers are located. The developed program successfully derived a 3D migration image with a horizontal grid size of 1 m from the 3D UHR seismic survey data acquired by the Korea Institute of Geoscience and Mineral Resources in 2019, and a geological analysis was conducted on the result.
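The memory-saving idea behind the excitation amplitude technique, keeping only the peak of the source wavefield per grid point instead of its entire time history, can be sketched as follows. This is an illustrative reduction under our own assumptions, not the authors' implementation:

```python
import numpy as np

def excitation_amplitude(source_wavefield):
    """For each grid point, keep only the amplitude with the largest
    magnitude over time and the time step at which it occurred,
    instead of storing the full (nt, *grid_shape) history.

    source_wavefield: array of shape (nt, *grid_shape).
    Returns (peak_step, peak_amp), each of shape grid_shape.
    """
    # time index of the largest-magnitude sample at each grid point
    peak_step = np.argmax(np.abs(source_wavefield), axis=0)
    # the (signed) amplitude at that time index
    peak_amp = np.take_along_axis(
        source_wavefield, peak_step[np.newaxis, ...], axis=0)[0]
    return peak_step, peak_amp
```

The storage cost drops from `nt` grid-sized arrays to two, which is why the technique matters at the several-hundred-Hz bands (and hence large `nt`) of UHR surveys.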

Design and Forensic Analysis of a Zero Trust Model for Amazon S3 (Amazon S3 제로 트러스트 모델 설계 및 포렌식 분석)

  • Kyeong-Hyun Cho; Jae-Han Cho; Hyeon-Woo Lee; Jiyeon Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.2 / pp.295-303 / 2023
  • As the cloud computing market grows, a variety of cloud services are now delivered reliably, and the administrative agencies and public institutions of South Korea are transferring all of their information systems to the cloud. Because protecting cloud services from misuse and from malicious access by insiders and outsiders over the Internet is challenging, security solutions must be developed in advance so that these services can be operated safely. In this paper, we propose a zero trust model for cloud storage services that store sensitive data, and we verify the effectiveness of the proposed model by operating a cloud storage service. Memory, web, and network forensics are also performed to track the access and usage of cloud users with and without the zero trust model. As the cloud storage service, we use Amazon S3 (Simple Storage Service) and deploy zero trust techniques such as access control lists and key management systems. Furthermore, to cover the different types of access to S3, we generate service requests both inside and outside AWS (Amazon Web Services) and analyze the results of the zero trust techniques depending on the location of the service request.
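The paper's concrete configuration is not reproduced here, but one typical building block of such a deployment, a deny-by-default S3 bucket policy that rejects every principal except a single trusted role, can be sketched with standard AWS policy syntax. The bucket name and role ARN below are placeholders; applying the policy would use a call such as boto3's `put_bucket_policy`:

```python
import json

def least_privilege_bucket_policy(bucket, allowed_role_arn):
    """Build a deny-by-default S3 bucket policy: every principal
    except one explicitly allowed IAM role is denied all S3 actions
    on the bucket and its objects.

    `aws:PrincipalArn` is a standard AWS global condition key that
    matches the ARN of the principal making the request.
    """
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllExceptTrustedRole",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:PrincipalArn": allowed_role_arn}
            },
        }],
    })
```

An explicit Deny with a `StringNotEquals` carve-out overrides any Allow elsewhere, which is the "never trust, always verify" posture a zero trust model requires.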

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In; Yun, Un-Il
    • Journal of Internet Computing and Services / v.16 no.2 / pp.77-83 / 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, extracts useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow us to analyze important characteristics within databases easily and automatically. However, traditional frequent pattern mining methods, which simply extract every pattern whose support is not smaller than a user-given minimum support threshold, have the following problems. First, depending on the features of a given database and the threshold setting, they generate an enormous number of patterns, and that number can grow geometrically. This wastes runtime and memory resources, and the excessively large set of resulting patterns makes analysis of the mining results troublesome. To solve these issues of traditional frequent pattern mining, the concept of representative pattern mining and various related works have been proposed. In contrast to traditional methods that find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the full set of frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that rely on the maximality or closure property of generated frequent patterns, and we compare and analyze these techniques.
Given a frequent pattern, maximality means that every proper superset of the pattern has support smaller than the user-specified minimum support threshold; the closure property means that no proper superset has support equal to that of the pattern. By mining maximal or closed frequent patterns, we can achieve effective pattern compression and perform mining with far less time and space. In addition, compressed patterns can be converted back into the original frequent patterns when necessary; in particular, the closed frequent pattern notation can recover the original patterns without any information loss, so a complete set of original frequent patterns can be obtained from the closed ones. Although the maximal frequent pattern notation does not guarantee complete recovery during pattern conversion, it can extract a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage through evaluation on various real-world data sets. For a more exact comparison, we implement the algorithms for these techniques on the same platform and at the same implementation level.
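The two condensing notions can be made concrete on a toy database. The sketch below enumerates frequent itemsets by brute force (fine for a handful of items; real miners are far more efficient) and then filters for closure and maximality exactly as defined above:

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """Brute-force frequent itemset mining: return a dict mapping each
    itemset (frozenset) with support >= minsup to its support count."""
    items = sorted({i for t in transactions for i in t})
    freq = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            support = sum(1 for t in transactions if set(cand) <= t)
            if support >= minsup:
                freq[frozenset(cand)] = support
    return freq

def closed_and_maximal(freq):
    """Closed: no frequent proper superset with equal support.
    Maximal: no frequent proper superset at all."""
    closed, maximal = set(), set()
    for pattern, support in freq.items():
        supers = [q for q in freq if pattern < q]
        if not any(freq[q] == support for q in supers):
            closed.add(pattern)
        if not supers:
            maximal.add(pattern)
    return closed, maximal
```

Note that every maximal pattern is also closed, and that closed patterns are lossless: the support of any frequent itemset X equals the largest support among the closed itemsets containing X, while maximal patterns record only the frequent border.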

Conflicts between the Conservation and Removal of the Modern Historic Landscapes - A Case of the Demolition Controversy of the Japanese General Government Building in Seoul - (근대 역사 경관의 보존과 철거 - 구 조선총독부 철거 논쟁을 사례로 -)

  • Son, Eun-Shin; Pae, Jeong-Hann
    • Journal of the Korean Institute of Landscape Architecture / v.46 no.4 / pp.21-35 / 2018
  • In recent years, there has been a tendency to reuse 'landscapes of memory', including industrial heritages, modern cultural heritages, and post-industrial parks, as public spaces in many cities. Among the various types of such landscapes, 'modern historic landscapes', formed in the 19th and 20th centuries, are those where the debate between conservation and removal arises most frequently as evaluations and perceptions of modern history change. This study examines the conflicts between conservation and removal surrounding modern historic landscapes and explores the value judgment criteria behind them and the process by which those landscapes are formed, as highlighted in the demolition controversy over the old Japanese general government building in Seoul, which was dismantled in 1995. First, this study reviews newspaper articles, television news, and debate programs from 1980-1999, together with articles related to the controversy over the building. It then identifies six main issues in the demolition controversy: the symbolic location; discoveries of, and responses to, new historical facts; the reaction and intervention of a related country; financial conditions; the function and usage of the landscape; and changes in urban, historical, and architectural policies. Based on these issues, the study examines the conflict between the symbolic values that play an important role in the formation of modern historic landscapes and determine their conservation or removal, and the functional values whose utility lies in solving problems and responding to criticisms that arise as such landscapes are formed. Notably, although decisions on the conservation or removal of modern historic landscapes have shifted with changing perceptions of modern history, the symbolic values remain the most important deciding factor.
Today, the modern historic landscape is an important site for urban design and still carries historical issues to be agreed upon and addressed. This study has contemporary significance in that it divides the many values of modern historic landscapes into symbolic values and functional values, evaluates them, and reviews the social context behind them.

An Analysis of the Roles of Experience in Information System Continuance (정보시스템의 지속적 사용에서 경험의 역할에 대한 분석)

  • Lee, Woong-Kyu
    • Asia Pacific Journal of Information Systems / v.21 no.4 / pp.45-62 / 2011
  • The notion of information systems (IS) continuance has recently emerged as one of the most important research issues in the field of IS. A great deal of research has been conducted thus far on the basis of theories adapted from various disciplines including consumer behaviors and social psychology, in addition to theories regarding information technology (IT) acceptance. This previous body of knowledge provides a robust research framework that can already account for the determination of IS continuance; however, this research points to other, thus-far-unelucidated determinant factors such as habit, which were not included in traditional IT acceptance frameworks, and also re-emphasizes the importance of emotion-related constructs such as satisfaction in addition to conscious intention with rational beliefs such as usefulness. Experiences should also be considered one of the most important factors determining the characteristics of information system (IS) continuance and the features distinct from those determining IS acceptance, because more experienced users may have more opportunities for IS use, which would allow them more frequent use than would be available to less experienced or non-experienced users. Interestingly, experience has dual features that may contradictorily influence IS use. On one hand, attitudes predicated on direct experience have been shown to predict behavior better than attitudes from indirect experience or without experience; as more information is available, direct experience may render IS use a more salient behavior, and may also make IS use more accessible via memory. 
Therefore, experience may intensify the relationship between IS use and conscious intention with evaluations. On the other hand, experience may culminate in the formation of habits: greater experience may imply more frequent performance of the behavior, which may lead to habit formation. Hence, with experience, users' activation of an IS may become more dependent on habit, that is, unconscious automatic use without deliberation regarding the IS, and less dependent on conscious intention. Furthermore, experiences provide the basic information necessary for satisfaction with the use of a specific IS, thus spurring the formation of both conscious intentions and unconscious habits. Whereas IT adoption is a one-time decision, IS continuance is a series of user decisions and evaluations based on satisfaction with IS use. Moreover, habits cannot be formed without satisfaction, even when a behavior is carried out repeatedly. Thus, experiences also play a critical role in satisfaction, as satisfaction is the consequence of direct experience of actual behavior. In particular, emotional experiences such as enjoyment can become as influential on IS use as utilitarian experiences such as usefulness, especially in light of the modern increase in membership-based hedonic systems, including online games, web-based social network services (SNS), blogs, and portals, all of which attempt to provide users with self-fulfilling value. Therefore, to understand more clearly the role of experiences in IS continuance, analysis must be conducted under a research framework that includes intentions, habits, and satisfaction, as experience may not only have duration-based moderating effects on the relationships of both intention and habit with the activation of IS use, but may also have content-based positive effects on satisfaction.
This is consistent with the basic assumptions regarding the determining factors in IS continuance suggested by Ortiz de Guinea and Markus: consciousness, emotion, and habit. The principal objective of this study was to explore and assess the effects of experience on IS continuance, with special consideration given to conscious intentions and unconscious habits, as well as satisfaction. In service of this goal, along with a review of the relevant literature on the effects of experience and habit on continuous IS use, this study suggests a research model that represents the roles of experience: its moderating role in the relationships of IS continuance with both conscious intention and unconscious habit, and its antecedent role in the development of satisfaction. To validate this research model, Korean university student users of 'Cyworld', one of the most influential social network services in South Korea, were surveyed, and the data were analyzed via partial least squares (PLS) analysis. As a result, all but one of the hypotheses in our research model were statistically supported. Although one hypothesis was not supported, the findings provide some important implications. First, the role of experience in IS continuance differs from its role in IS acceptance. Second, the use of IS was explained by the dynamic balance between habit and intention. Third, the importance of satisfaction was confirmed from the perspective of IS continuance with experience.

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.55-75 / 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. General sequential pattern mining considers only the generation order of the data elements in a sequence, so it easily finds simple sequential patterns but is limited in finding the more interesting sequential patterns widely used in real-world applications. One essential research topic that compensates for this limitation is weighted sequential pattern mining, in which not only the generation order of each data element but also its weight is considered in order to obtain more interesting sequential patterns. Recently, data has increasingly taken the form of continuous data streams rather than finite stored data sets in various application fields, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, and the memory used for analysis should be bounded even though new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible, so that an up-to-date analysis result of the data stream can be produced and instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering these changes in the form of data generated in real-world application fields, much research has been actively performed to find various kinds of knowledge embedded in data streams.
Such research mainly focuses on the efficient mining of frequent itemsets and sequential patterns over data streams, both of which have proven useful in conventional mining of finite data sets. In addition, mining algorithms have been proposed to efficiently reflect the changes of data streams over time in their mining results. However, these works target naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, and take no interest in mining novel patterns that better express the characteristics of the target data streams. Therefore, defining novel interesting patterns and developing mining methods that find them is a valuable research topic in the field of mining data streams, and such methods can be used effectively to analyze recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a method for mining weighted sequential patterns over sequence data streams via this weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between the data elements of the pattern, without any pre-defined weight information. That is, the approach uses the gaps between the data elements of each sequential pattern as well as their generation orders to compute the pattern's weight, which helps to obtain more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than finite data sets, the proposed method focuses mainly on sequence data streams.
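The paper's exact weighting formula is not reproduced in the abstract, so the sketch below uses an illustrative stand-in: find the earliest occurrence of the pattern as a subsequence and weight it by how tightly its elements cluster, with `1 / (1 + mean gap)` giving weights closer to 1 for smaller gaps. The function name and formula are assumptions for illustration only:

```python
def gap_based_weight(sequence, pattern):
    """Weight a sequential pattern by the gaps between its elements in
    the earliest occurrence within `sequence` (greedy leftmost match).

    Returns 0.0 if the pattern does not occur as a subsequence, and a
    weight in (0, 1] otherwise: adjacent elements (gap 0) give 1.0,
    and the weight shrinks as the average gap grows.
    """
    positions = []
    start = 0
    for elem in pattern:
        try:
            idx = sequence.index(elem, start)
        except ValueError:
            return 0.0  # pattern is not a subsequence of the sequence
        positions.append(idx)
        start = idx + 1
    if len(positions) < 2:
        return 1.0  # a single-element pattern has no gaps
    gaps = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return 1.0 / (1.0 + sum(gaps) / len(gaps))
```

No pre-defined per-item weights are needed, which is the property the abstract highlights: the weight comes entirely from the occurrence structure of the pattern itself.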

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil; Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategy used to exploit item importance, such approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute a weight for each transaction from the weights of its items in large databases, and then discover weighted frequent itemsets on the basis of the item frequency and the weight of each transaction. Consequently, the importance of a given transaction becomes visible through database analysis, because a transaction's weight is higher when it contains many items with high values. We analyze the advantages and disadvantages, and compare the performance, of the best-known algorithms for frequent itemset mining based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are several other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. They do not need an additional database scan once the WIT-tree has been constructed, since each node of the WIT-tree holds item information such as the item and its transaction IDs.
In particular, whereas traditional algorithms perform a number of database scans to mine weighted itemsets, the algorithms based on the WIT-tree avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them all; WIT-FWIs-MODIFY reduces the number of operations needed to calculate the frequency of each new itemset; and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, which is based on the Apriori technique, has the worst efficiency because on average it requires far more computations than the others.
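The core idea of transactional weights can be sketched concretely. The mean-of-item-weights formula below is a common convention in this line of work, used here for illustration; the specific algorithms surveyed above differ in their details:

```python
def transaction_weight(transaction, item_weights):
    """A transaction's weight is the mean of its items' weights, so
    transactions holding many high-value items score higher."""
    return sum(item_weights[i] for i in transaction) / len(transaction)

def weighted_support(itemset, transactions, item_weights):
    """Weighted support of an itemset: the total weight of the
    transactions that contain every item of the itemset."""
    return sum(transaction_weight(t, item_weights)
               for t in transactions
               if set(itemset) <= set(t))
```

An itemset is then reported as weighted-frequent when its weighted support reaches a user-given threshold, in place of the plain occurrence count used by unweighted mining.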