• Title/Summary/Keyword: 값 형태 복구 (value form recovery)


Audio Texture Synthesis using EM Optimization (EM 최적화를 이용한 오디오 텍스처 합성)

  • Roe, Chang-Hwan; Yoo, Min-Joon; Lee, In-Kwon
    • 한국HCI학회 학술대회논문집 / 2007.02a / pp.274-280 / 2007
  • Audio texture synthesis is a method for generating a new audio clip of arbitrary length from a given short audio clip. It provides an efficient way to create sound effects that are precisely synchronized with video in animation or film, or background music of arbitrary length. Recently, Lie Lu proposed a method that divides a given example audio clip into several segments, connects the segments in a graph, and synthesizes an audio clip of arbitrary length by traversing the graph. This relatively simple approach produces audio clips that feel similar to the original, but because segments of the original are merely concatenated one after another, the result often sounds repetitive. In this paper, unlike Lie Lu's method, we propose a method that synthesizes the result directly from the given example audio clip, reducing repetitiveness while keeping the result similar in feel to the original. In particular, we use EM optimization to achieve accurate synthesis. The proposed method first divides the example audio clip into fixed-size units and builds an audio clip of arbitrary length by overlapping the divided parts by a fixed amount. The generated clip is then compared, part by part, with the example audio clip to find the parts of the example that are most similar to the extended clip. The matched parts are synthesized back into the result audio, and this process is repeated to obtain the optimized result in which the divided parts are joined most naturally. Because the synthesis is driven by optimization, the result can easily be controlled: by adding constraints to the optimization problem, the user can place a desired part of the music at a specific position in the resulting sound to create a particular flow, and partially lost sound data can be recovered. Compared with the existing synthesis method, audio texture synthesis using EM optimization produces qualitatively better results and patterns with less repetition. A user survey supporting these claims is presented.
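A rough sketch of the iterative match-and-blend loop summarized above is given below, assuming mono audio stored in numpy arrays; the window length, hop size, and iteration count are illustrative choices rather than the parameters used in the paper, and the "E"/"M" comments only mirror the matching and blending roles of the two steps.

```python
import numpy as np

def synthesize(example: np.ndarray, out_len: int,
               win: int = 2048, hop: int = 1024, iters: int = 10,
               rng=np.random.default_rng(0)) -> np.ndarray:
    # Candidate segments taken from the example clip.
    starts = np.arange(0, len(example) - win, hop)
    segments = np.stack([example[s:s + win] for s in starts])
    hann = np.hanning(win)

    out = np.zeros(out_len)
    norm = np.zeros(out_len)

    def overlap_add(chosen):
        # Blend the chosen segments into the output with overlapping windows.
        out[:] = 0.0
        norm[:] = 0.0
        for i, pos in enumerate(range(0, out_len - win, hop)):
            out[pos:pos + win] += hann * chosen[i]
            norm[pos:pos + win] += hann
        out[:] = out / np.maximum(norm, 1e-8)

    # Initialization: overlap-add randomly chosen segments to the target length.
    n_windows = len(range(0, out_len - win, hop))
    overlap_add(segments[rng.integers(0, len(segments), size=n_windows)])

    for _ in range(iters):
        # "E-step"-like matching: for each output window, find the most
        # similar segment of the example clip.
        matched = []
        for pos in range(0, out_len - win, hop):
            patch = out[pos:pos + win]
            dists = np.sum((segments - patch) ** 2, axis=1)
            matched.append(segments[np.argmin(dists)])
        # "M-step"-like blending: re-synthesize the output from the matches.
        overlap_add(matched)
    return out
```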


Evaluation of Domestic and Foreign Design Standards for Soil Nailing Method by Analysis of Slope Restoration Case (비탈면 복구사례 분석을 통한 쏘일네일링 공법의 국내외 설계기준 평가)

  • You, Kwang-Ho; Kim, Tae-Won
    • Journal of the Korean GEO-environmental Society / v.20 no.11 / pp.11-22 / 2019
  • Limit state design (LSD) and allowable stress design (ASD) are the two main soil nailing design methodologies. In the LSD method, stability is determined by applying individual partial factors to the ground strength, the working load, and other design variables. The ASD method calculates a safety factor and compares it with a minimum safety factor to determine stability. The global design trend for soil nailing systems is shifting from the ASD method to the LSD method; the design method in Korea still adopts the ASD philosophy, while most other countries have moved to limit state design. In this study, four foreign soil nail design standards, 'FHWA GEC 7' in the U.S. (2015), 'Clouterre' in France (1991), 'Soil nailing - best practice guidance' (CIRIA) in the U.K. (2005), and 'Geoguide 7' in Hong Kong (2008), together with the 'Design guide for slope in construction work' in Korea (2016), were applied to a stability evaluation, and the results were compared. It is revealed that the Korean design method for the overall stability of soil nail walls is the most conservative, followed in order by those of FHWA, Clouterre, and CIRIA, although the difference between the FHWA and Clouterre results is negligible. The study also found that efforts to improve the domestic design criterion are needed.
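As a rough illustration of the difference between the two design philosophies mentioned in the abstract, the following sketch contrasts an ASD safety-factor check with an LSD partial-factor check; the factor values and the minimum safety factor are hypothetical placeholders, not values taken from any of the cited standards.

```python
def asd_check(resistance: float, load: float, fs_min: float = 1.5) -> bool:
    """Allowable stress design: compute a global safety factor and
    compare it against the required minimum."""
    fs = resistance / load
    return fs >= fs_min

def lsd_check(resistance: float, load: float,
              resistance_factor: float = 0.8,
              load_factor: float = 1.35) -> bool:
    """Limit state design: apply individual partial factors to the
    resistance and the working load, then compare the factored values."""
    return resistance * resistance_factor >= load * load_factor

# Example: a nominal pull-out resistance of 300 kN against a working load of 180 kN.
print(asd_check(300.0, 180.0))   # True  (FS = 1.67 >= 1.5)
print(lsd_check(300.0, 180.0))   # False (240 kN factored resistance < 243 kN factored load)
```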

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In; Yun, Un-Il
    • Journal of Internet Computing and Services / v.16 no.2 / pp.77-83 / 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow us to analyze various important characteristics of databases more easily and automatically. However, traditional frequent pattern mining methods, which simply extract all possible frequent patterns whose support values are not smaller than a user-given minimum support threshold, have the following problems. First, traditional approaches can generate an enormous number of patterns depending on the characteristics of a given database and the threshold setting, and the number can grow geometrically; this also wastes runtime and memory resources. Furthermore, the excessive number of patterns generated by these methods makes analysis of the mining results difficult. To solve these issues of traditional frequent pattern mining approaches, the concept of representative pattern mining and various related works have been proposed. In contrast to traditional methods that find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that consider the maximality or closure property of generated frequent patterns, and compare and analyze these techniques. Given a frequent pattern, satisfying maximality signifies that all possible supersets of the pattern have support values smaller than the user-given minimum support threshold; satisfying the closure property means that no superset of the pattern has a support equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can recover the original patterns without any information loss, i.e., a complete set of original frequent patterns can be obtained from the closed frequent patterns. Although the maximal frequent pattern notation does not guarantee complete recovery in the process of pattern conversion, it has the advantage of extracting a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance results and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage by conducting a performance evaluation on various real data sets collected from the real world. For an exact comparison, we implement the algorithms for these techniques on the same platform and at the same implementation level.
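A minimal sketch of the closure and maximality checks described in the abstract is shown below; the toy itemsets and support counts are made up for illustration and assume the frequent patterns have already been mined.

```python
def is_proper_superset(p, q):
    """True if pattern p is a proper superset of pattern q."""
    return q < p  # frozenset proper-subset comparison

def closed_patterns(freq):
    """Closed: no proper superset has the same support."""
    return {p: s for p, s in freq.items()
            if not any(is_proper_superset(q, p) and freq[q] == s for q in freq)}

def maximal_patterns(freq):
    """Maximal: no proper superset is frequent at all."""
    return {p: s for p, s in freq.items()
            if not any(is_proper_superset(q, p) for q in freq)}

# Hypothetical frequent patterns with their support counts (minimum support
# already applied, so every subset of a frequent pattern is also listed).
freq = {
    frozenset("a"): 4, frozenset("b"): 4, frozenset("c"): 3,
    frozenset("ab"): 4, frozenset("ac"): 2, frozenset("bc"): 2,
    frozenset("abc"): 2,
}
print(closed_patterns(freq))   # {c}:3, {a,b}:4, {a,b,c}:2
print(maximal_patterns(freq))  # {a,b,c}:2 only
```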

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo; Youjeong Youn; Seoyeon Kim; Jonggu Kang; Yemin Jeong; Soyeon Choi; Yungyo Im; Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1413-1425 / 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires destroy vegetation and, depending on their intensity and extent, drive ecosystem changes; these changes in turn affect wildfire occurrence and cause secondary damage. Thus, accurate estimation of the areas affected by wildfires is fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after a fire. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, however, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. We examined deep learning models for detecting wildfire-damaged areas in California using Landsat 8 satellite images and conducted a comprehensive comparison and analysis of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and the normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. Including the spectral indices alongside the base wavelength bands increased the metric values for all combinations, confirming that augmenting the input data with spectral indices contributes to more precise pixel-level classification. This study can be applied to other satellite images to build recovery strategies for fire-burnt areas.
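As a rough sketch of how the spectral indices mentioned above can be derived from Landsat 8 bands and appended to the base reflectance channels (the band selection and stacking below are illustrative assumptions, not necessarily the paper's exact input configuration):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-6)  # small epsilon avoids division by zero

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalized burn ratio."""
    return (nir - swir2) / (nir + swir2 + 1e-6)

def stack_inputs(bands: dict) -> np.ndarray:
    """Append the indices to the base reflectance bands so the segmentation
    model receives them as additional channels. Landsat 8 OLI band keys are
    assumed: B4 = red, B5 = NIR, B7 = SWIR2."""
    channels = [bands["B2"], bands["B3"], bands["B4"], bands["B5"],
                bands["B6"], bands["B7"],
                ndvi(bands["B5"], bands["B4"]),
                nbr(bands["B5"], bands["B7"])]
    return np.stack(channels, axis=0)  # shape: (channels, height, width)
```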

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for processing massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of existing computing infrastructures. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores the aggregated log data by replicating them in block units, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, with the strict schemas of relational databases, nodes cannot easily be expanded when the stored data must be distributed across multiple nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases do, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database model with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and log type, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log data insertion performance evaluation for various chunk sizes.
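As a rough sketch of the schema-free, document-oriented storage described in the abstract (the connection string, database, collection, and field names are hypothetical placeholders rather than the paper's actual schema), heterogeneous log records can be written to the same MongoDB collection and summarized per type:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["transactions"]   # hypothetical database/collection names

# Documents with different shapes can be stored in the same collection,
# which is what makes the document model convenient for unstructured logs.
logs.insert_one({"ts": datetime.now(timezone.utc), "type": "login",
                 "client_id": "C001", "branch": "Seoul"})
logs.insert_one({"ts": datetime.now(timezone.utc), "type": "transfer",
                 "client_id": "C002", "amount": 50000,
                 "meta": {"channel": "mobile", "retries": 1}})

# A simple aggregation per log type, similar in spirit to the per-time,
# per-type summaries plotted by the log graph generator module.
per_type = logs.aggregate([
    {"$group": {"_id": "$type", "count": {"$sum": 1}}}
])
print(list(per_type))
```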