• Title/Summary/Keyword: Analysis Techniques

Initial development of wireless acoustic emission sensor Motes for civil infrastructure state monitoring

  • Grosse, Christian U.;Glaser, Steven D.;Kruger, Markus
    • Smart Structures and Systems / v.6 no.3 / pp.197-209 / 2010
  • The structural state of a bridge is currently examined by visual inspection or by wired sensor techniques, which are relatively expensive, vulnerable to inclement conditions, and time consuming to undertake. In contrast, wireless sensor networks are easy to deploy and flexible in application, so the network can adjust to the individual structure. Different sensing techniques have been used with such networks, but the acoustic emission technique has rarely been utilized. With the use of acoustic emission (AE) techniques it is possible to detect internal structural damage from cracks propagating during the routine use of a structure, e.g., breakage of prestressing wires. To date, AE data analysis techniques have not been appropriate for the requirements of a wireless network, due to the very exact time synchronization needed between multiple sensors and due to power consumption issues. To unleash the power of the acoustic emission technique on large, extended structures, recording and local analysis techniques need better algorithms to handle and reduce the immense amount of data generated. Preliminary results from a new concept called Acoustic Emission Array Processing, which locally reduces data to information, are presented. Results show that the azimuthal location of a seismic source can be successfully identified using an array of six to eight poor-quality AE sensors arranged in a circular array approximately 200 mm in diameter. AE beamforming requires very fine time synchronization only among the sensors within a single array; relative timing between sensors to within 1 µs can easily be maintained by a single Mote servicing the array. The method concentrates the essence of six to eight extended waveforms into a single value to be sent through the wireless network, resulting in power savings by avoiding extended radio transmission.
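
The array-processing idea lends itself to a compact sketch. Below is a minimal delay-and-sum beamformer in Python that scans candidate azimuths and keeps the one whose delay-compensated sum has maximum energy; the wave speed, sample rate, and eight-sensor circular geometry are illustrative assumptions, not values taken from the paper.

```python
# Minimal delay-and-sum azimuth estimation for a circular AE sensor array.
import numpy as np

FS = 1_000_000   # 1 MHz sampling, i.e. 1 us resolution (matches the sync requirement)
C = 3000.0       # assumed wave speed in the structure, m/s
R = 0.1          # array radius, m (approx. 200 mm diameter)
N_SENSORS = 8

angles = 2 * np.pi * np.arange(N_SENSORS) / N_SENSORS
sensor_xy = R * np.column_stack([np.cos(angles), np.sin(angles)])

def estimate_azimuth(signals: np.ndarray, n_candidates: int = 360) -> float:
    """signals: (N_SENSORS, n_samples). Returns the azimuth (radians)
    whose delay-compensated sum has maximum energy."""
    best_az, best_power = 0.0, -np.inf
    for az in np.linspace(0.0, 2 * np.pi, n_candidates, endpoint=False):
        u = np.array([np.cos(az), np.sin(az)])   # unit vector toward candidate source
        arrivals = -(sensor_xy @ u) / C          # relative plane-wave arrival times, s
        shifts = np.round(arrivals * FS).astype(int)
        # Align each channel to the array centre and stack (np.roll wraps,
        # which is acceptable for padded records in a sketch).
        stacked = sum(np.roll(sig, -s) for sig, s in zip(signals, shifts))
        power = float(np.sum(stacked ** 2))
        if power > best_power:
            best_az, best_power = az, power
    return best_az
```

Only the single winning azimuth would need to be transmitted, which is exactly the waveforms-to-one-value reduction the abstract describes.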

A review of the Implementation of Functional Brain Imaging Techniques in Auditory Research focusing on Hearing Loss (청각 연구에서 기능적 뇌 영상 기술 적용에 대한 고찰: 난청을 중심으로)

  • Hye Yoon Seol;Jaeyoung Shin
    • Journal of Biomedical Engineering Research / v.45 no.1 / pp.26-36 / 2024
  • Functional brain imaging techniques have been used to diagnose psychiatric disorders such as dementia, depression, and autism. Recently, these techniques have also been actively used to study hearing loss. The present study reviewed the application of functional brain imaging techniques in auditory research over the past decade, especially in studies focusing on hearing loss. EEG, fMRI, fNIRS, MEG, and PET have been utilized in auditory research, and the number of studies using these techniques has been increasing. In particular, fMRI and EEG were the most frequently used techniques in auditory research. EEG studies mostly used event-related designs to analyze the direct relationship between a stimulus and the related response, while fMRI studies utilized resting-state functional connectivity and block designs to analyze alterations in brain function in hearing-related areas. In terms of age, studies involving children mainly focused on congenital and pre- and post-lingual hearing loss to analyze developmental characteristics with and without hearing loss, while those involving adults focused on age-related hearing loss to investigate changes in the characteristics of the brain depending on the presence of hearing loss and the use of a hearing device. Overall, ranging from EEG to PET, various functional brain imaging techniques have been used in auditory research, but a comprehensive analysis is difficult due to the lack of consistency in experimental designs, analysis methods, and participant characteristics. Thus, it is necessary to develop standardized research protocols to obtain high-quality clinical and research evidence.
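
As a concrete illustration of the event-related designs the review mentions for EEG, the following sketch averages stimulus-locked epochs into an evoked response, so activity time-locked to the stimulus survives while unrelated background activity averages out. The sampling rate, window, and array shapes are assumptions for illustration.

```python
# Event-related averaging: extract stimulus-locked epochs and average them.
import numpy as np

def event_related_average(eeg: np.ndarray, onsets: np.ndarray,
                          fs: int = 1000, tmin: float = -0.1,
                          tmax: float = 0.5) -> np.ndarray:
    """eeg: (n_channels, n_samples); onsets: stimulus sample indices.
    Returns the averaged evoked response, (n_channels, n_epoch_samples)."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:
        if onset - pre < 0 or onset + post > eeg.shape[1]:
            continue  # skip events too close to the record edges
        seg = eeg[:, onset - pre: onset + post].astype(float)
        # Baseline-correct each channel using the pre-stimulus interval
        seg -= seg[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(seg)
    return np.mean(epochs, axis=0)
```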

Pooling-Across-Environments Method for the Generation of Composite-Material Allowables (환경조건간 합동을 이용한 복합재료 허용치 생성 기법)

  • Rhee, Seung Yun
    • Journal of Aerospace System Engineering / v.10 no.3 / pp.63-69 / 2016
  • The properties of composite materials, when compared to those of metallic materials, are highly variable due to many factors, including the batch-to-batch variability of raw materials, the prepreg manufacturing process, material handling, part-fabrication techniques, ply-stacking sequences, environmental conditions, and test procedures. It is therefore necessary to apply reliable statistical-analysis techniques to obtain the design allowables of composite materials. A new composite-material qualification process was developed by the Advanced General Aviation Transport Experiments (AGATE) consortium to yield the lamina design allowables of composite materials according to standardized coupon-level tests and statistical techniques; moreover, the generated allowables database can be shared among multiple users without each user repeating the full qualification procedure. In 2005, NASA established the National Center for Advanced Materials Performance (NCAMP) to refine and enhance the AGATE process to a self-sustaining level that serves the entire aerospace industry. In this paper, the statistical techniques and procedures for generating allowables for aerospace composite materials are discussed, with a focus on the pooling-across-environments method.
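
A simplified, illustrative version of the pooling statistics is sketched below: strength data from several environments are normalized by their environment means, a single pooled coefficient of variation is estimated from all environments, and a normal-model B-basis value (95% confidence that 90% of the population exceeds it) is computed per environment. The k-factor below is the standard one-sided normal tolerance factor; the data values are invented, and the actual qualification procedures prescribe additional diagnostics (normality, outlier, and equality-of-variance checks) beyond this sketch.

```python
# Simplified pooling-across-environments B-basis sketch (normal model).
import numpy as np
from scipy.stats import nct, norm

def b_basis_k(n: int) -> float:
    """One-sided tolerance factor: 95% confidence that 90% of the
    population exceeds mean - k * s, assuming a normal distribution."""
    return nct.ppf(0.95, df=n - 1, nc=norm.ppf(0.90) * np.sqrt(n)) / np.sqrt(n)

def pooled_b_basis(groups: dict[str, np.ndarray]) -> dict[str, float]:
    # Pool within-environment variability after normalizing by group means.
    n_total = sum(len(g) for g in groups.values())
    ss = sum(np.sum((g / g.mean() - 1.0) ** 2) for g in groups.values())
    cv_pooled = np.sqrt(ss / (n_total - len(groups)))  # pooled coefficient of variation
    # Larger pooled n gives a smaller k than any single environment would;
    # the formal procedure uses modified factors, this is the basic idea.
    k = b_basis_k(n_total)
    return {env: g.mean() * (1.0 - k * cv_pooled) for env, g in groups.items()}

allowables = pooled_b_basis({
    "RTD": np.array([152.0, 148.5, 150.2, 151.1, 149.3]),  # invented strengths, MPa
    "ETW": np.array([138.4, 136.9, 140.1, 137.7, 139.0]),
})
```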

Performance Comparison Analysis of DS-CDMA and FH-CDMA in Satellite Communication System (위성통신시스템에서의 DS-CDMA 와 FH-CDMA의 성능 비교분석)

  • 이양선;강희조
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2000.05a / pp.183-187 / 2000
  • In this paper, the performance of the DS-CDMA and FH-CDMA systems is compared and analyzed in a channel characterized by a multi-user environment and Rayleigh fading. The performance improvement obtained by adopting MRC diversity techniques is also evaluated. To compare the two systems under identical conditions, the BER (Bit Error Rate) is analyzed as the number of users increases, for the same communication bandwidth (300 kHz), a jamming-to-signal ratio (JSR) of 10-20 dB, and a user data rate of 300 bps. The results show that the performance of both the DS and FH systems in the multi-user Rayleigh fading environment improves when MRC diversity techniques are adopted. In particular, with MRC diversity the DS system outperforms the FH system by a factor of about 9.5.
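
For context on why MRC helps in this scenario, the classic closed-form BER of coherent BPSK with L-branch maximal-ratio combining in flat Rayleigh fading (a standard textbook result, e.g. in Proakis) can be evaluated directly. This is a general formula, not the paper's exact system model, and the SNR values below are illustrative.

```python
# Closed-form BER of coherent BPSK with L-branch MRC in flat Rayleigh fading.
from math import comb, sqrt

def ber_bpsk_mrc_rayleigh(snr_db: float, branches: int) -> float:
    g = 10 ** (snr_db / 10)          # average SNR per diversity branch
    mu = sqrt(g / (1 + g))
    p = (1 - mu) / 2
    return p ** branches * sum(
        comb(branches - 1 + k, k) * ((1 - p) ** k) for k in range(branches)
    )

# Diversity sharply steepens the BER curve versus the single-branch case:
for snr in (5, 10, 15):
    print(snr, ber_bpsk_mrc_rayleigh(snr, 1), ber_bpsk_mrc_rayleigh(snr, 2))
```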

Attack Categorization based on Web Application Analysis (웹 어플리케이션 특성 분석을 통한 공격 분류)

  • 서정석;김한성;조상현;차성덕
    • Journal of KIISE: Information Networking / v.30 no.1 / pp.97-116 / 2003
  • The frequency of attacks on web services and the resulting damage continue to grow as web services become popular. Techniques used in web service attacks are usually different from traditional network intrusion techniques, and techniques to protect web services are badly needed. Unfortunately, conventional intrusion detection systems (IDS), especially those based on known attack signatures, are inadequate for providing a reasonable degree of security to web services. An application-level IDS, tailored to web services, is needed to overcome such limitations. The first step in developing a web application IDS is to analyze known attacks on web services and characterize them so that anomaly-based intrusion detection becomes possible. In this paper, we classify known attack techniques against web services by analyzing their causes, the locations where such attacks can be most easily detected, and their potential risks.
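
A toy version of this characterization step might pair each known attack category with a signature pattern and the location where it is most easily detected. The patterns and categories below are illustrative, not the paper's taxonomy.

```python
# Toy attack categorization: match requests against per-category signatures.
import re

ATTACK_PATTERNS = [
    # (category, best detection point, signature over the request string)
    ("SQL injection",  "application / DB query layer",
     re.compile(r"('|--|\bunion\b.+\bselect\b)", re.I)),
    ("XSS",            "response output encoding",
     re.compile(r"<script|onerror\s*=", re.I)),
    ("Path traversal", "web server / filesystem",
     re.compile(r"\.\./|\.\.\\")),
]

def categorize(request: str):
    """Return (category, detection point) for every matching signature."""
    return [(cat, where) for cat, where, pat in ATTACK_PATTERNS if pat.search(request)]

print(categorize("GET /item?id=1' UNION SELECT password FROM users--"))
```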

Modern Methods of Text Analysis as an Effective Way to Combat Plagiarism

  • Myronenko, Serhii;Myronenko, Yelyzaveta
    • International Journal of Computer Science & Network Security / v.22 no.8 / pp.242-248 / 2022
  • The article presents an analysis of modern methods of automatic comparison of original and unoriginal text to detect textual plagiarism. The study covers two types of plagiarism: literal, where plagiarists copy the text exactly without changing anything, and intelligent, which uses more sophisticated techniques that are harder to detect because of text manipulation such as the replacement of words and signs. Standard techniques related to extrinsic detection are string-based, vector-space and semantic-based. The most common and most successful target models for detecting literal plagiarism, N-gram and Vector Space, are analyzed first, and their advantages and disadvantages are evaluated. The most effective target models for detecting intelligent plagiarism, particularly for identifying paraphrases by measuring the semantic similarity of short components of the text, are then investigated. Models using neural network architectures and based on natural-language sentence-matching approaches, such as the Densely Interactive Inference Network (DIIN), Bilateral Multi-Perspective Matching (BiMPM) and Bidirectional Encoder Representations from Transformers (BERT) and its family of models, are considered. Progress in improving plagiarism detection systems, techniques and related models is summarized. Relevant and urgent problems that remain unresolved in detecting intelligent plagiarism, namely the effective recognition of unoriginal ideas and qualitatively paraphrased text, are outlined.
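
The two string-based/vector-space baselines the article reviews for literal plagiarism reduce to a few lines each. Below is a minimal sketch of word n-gram overlap (Jaccard) and term-frequency cosine similarity; similarity thresholds would be tuned on real data.

```python
# Literal-plagiarism baselines: n-gram Jaccard overlap and TF cosine similarity.
from collections import Counter
from math import sqrt

def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 3) -> float:
    """Share of word n-grams common to both texts."""
    A, B = ngrams(a, n), ngrams(b, n)
    return len(A & B) / len(A | B) if A | B else 0.0

def cosine(a: str, b: str) -> float:
    """Cosine similarity of term-frequency vectors (vector space model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0
```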

Application Consideration of Machine Learning Techniques in Satellite Systems

  • Jin-keun Hong
    • International journal of advanced smart convergence / v.13 no.2 / pp.48-60 / 2024
  • With the exponential growth of satellite data utilization, machine learning has become pivotal in enhancing innovation and cybersecurity in satellite systems. This paper investigates the role of machine learning techniques in identifying and mitigating vulnerabilities and code smells within satellite software. We explore satellite system architecture and survey applications like vulnerability analysis, source code refactoring, and security flaw detection, emphasizing feature extraction methodologies such as Abstract Syntax Trees (AST) and Control Flow Graphs (CFG). We present practical examples of feature extraction and training models using machine learning techniques like Random Forests, Support Vector Machines, and Gradient Boosting. Additionally, we review open-access satellite datasets and address prevalent code smells through systematic refactoring solutions. By integrating continuous code review and refactoring into satellite software development, this research aims to improve maintainability, scalability, and cybersecurity, providing novel insights for the advancement of satellite software development and security. The value of this paper lies in its focus on addressing the identification of vulnerabilities and resolution of code smells in satellite software. In terms of the authors' contributions, we detail methods for applying machine learning to identify potential vulnerabilities and code smells in satellite software. Furthermore, the study presents techniques for feature extraction and model training, utilizing Abstract Syntax Trees (AST) and Control Flow Graphs (CFG) to extract relevant features for machine learning training. Regarding the results, we discuss the analysis of vulnerabilities, the identification of code smells, maintenance, and security enhancement through practical examples. This underscores the significant improvement in the maintainability and scalability of satellite software through continuous code review and refactoring.
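
A hedged sketch of the AST-based feature extraction the paper surveys is shown below, using Python's ast module and a scikit-learn random forest. Real satellite flight software would typically be C/C++ and need other parsers; the snippets and labels here are invented for illustration.

```python
# AST node-type histograms as structural features for a random forest.
import ast
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def ast_node_histogram(source: str) -> Counter:
    """Count AST node types as a simple structural feature of the code."""
    return Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))

NODE_TYPES = ["FunctionDef", "Call", "If", "For", "While", "Try", "Assign"]

def featurize(source: str) -> list[int]:
    hist = ast_node_histogram(source)
    return [hist.get(t, 0) for t in NODE_TYPES]

snippets = [
    "def f(x):\n    return x + 1\n",
    "def g(x):\n    try:\n        return eval(x)\n    except Exception:\n        return None\n",
]
labels = [0, 1]  # invented labels: 1 = flagged (e.g., risky eval-style pattern)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([featurize(s) for s in snippets], labels)
```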

Framework Design for Malware Dataset Extraction Using Code Patches in a Hybrid Analysis Environment (코드패치 및 하이브리드 분석 환경을 활용한 악성코드 데이터셋 추출 프레임워크 설계)

  • Ki-Sang Choi;Sang-Hoon Choi;Ki-Woong Park
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.3 / pp.403-416 / 2024
  • Malware is being commercialized and sold on the black market, primarily driven by financial incentives. With the increasing demand driven by these sales, the scope of attacks via malware has expanded. In response, there has been a surge in research efforts leveraging artificial intelligence for detection and classification. However, adversaries are integrating various anti-analysis techniques into their malware to thwart analytical efforts. In this study, we introduce the "Malware Analysis with Dynamic Extraction (MADE)" framework, a hybrid binary analysis tool devised to procure datasets from advanced malware incorporating Anti-Analysis techniques. The MADE framework has the proficiency to autonomously execute dynamic analysis on binaries, encompassing those laden with Anti-VM and Anti-Debugging defenses. Experimental results substantiate that the MADE framework can effectively circumvent over 90% of diverse malware implementations using Anti-Analysis techniques and can adeptly extract relevant datasets.
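
As a hedged, toy illustration of the code-patching idea behind such frameworks (not the MADE implementation itself): an anti-debugging check typically ends in a conditional branch, and overwriting that branch forces the protected path to execute under analysis. The byte values below are the x86 opcodes for short JZ (0x74) and short JMP (0xEB); the buffer and offset are invented, and real tooling must first locate the check.

```python
# Toy anti-debug bypass: rewrite a conditional jump into an unconditional one.
def patch_conditional_jump(image: bytearray, offset: int) -> None:
    """Replace a short JZ at `offset` with an unconditional short JMP."""
    if image[offset] != 0x74:
        raise ValueError("expected a short JZ (0x74) at the given offset")
    image[offset] = 0xEB  # short JMP: the check no longer diverts execution

# test eax,eax (85 C0); jz +5 (74 05); then NOP padding
code = bytearray(b"\x85\xc0\x74\x05\x90\x90\x90\x90\x90")
patch_conditional_jump(code, 2)
```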

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference / 1999.06a / pp.175-186 / 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have come to be considered more useful than other methods for handling time series that contain strong quasi-cyclical components. The reason is that wavelet analysis theoretically extracts much better local information, at different time intervals, from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which correlates with time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms of the signal. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean won / U.S. dollar currency market as a case study. One of the most important problems that has to be solved in applying such filtering is the correct choice of filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a certain theoretical or statistical criterion. Recently, new and versatile techniques related to this problem have been introduced. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially for complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues related to optimal filter design in wavelet analysis; these issues include finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.
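
As a minimal sketch of the wavelet shrinkage being compared, the following decomposes a series, soft-thresholds the detail coefficients with Donoho's universal threshold, and reconstructs. PyWavelets and the db4 wavelet are illustrative choices; the studies' point is precisely that several threshold criteria are worth comparing rather than fixing one.

```python
# Wavelet shrinkage: decompose, soft-threshold details, reconstruct.
import numpy as np
import pywt

def wavelet_denoise(x: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale estimated from the finest detail level (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(x)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```

Too small a threshold leaves noise in (overfitting downstream models); too large a threshold removes signal-bearing high-frequency terms (underfitting), which is the trade-off the abstract describes.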

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.03a / pp.175-186 / 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have come to be considered more useful than other methods for handling time series that contain strong quasi-cyclical components. The reason is that wavelet analysis theoretically extracts much better local information, at different time intervals, from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which correlates with time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms of the signal. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean won / U.S. dollar currency market as a case study. One of the most important problems that has to be solved in applying such filtering is the correct choice of filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a certain theoretical or statistical criterion. Recently, new and versatile techniques related to this problem have been introduced. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially for complex financial markets. Secondly, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues related to optimal filter design in wavelet analysis; these issues include finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.
