• Title/Summary/Keyword: domain decomposition methods

Comparison between wind load by wind tunnel test and in-site measurement of long-span spatial structure

  • Liu, Hui;Qu, Wei-Lian;Li, Qiu-Sheng
    • Wind and Structures
    • /
    • v.14 no.4
    • /
    • pp.301-319
    • /
    • 2011
  • The full-scale measurements are compared with wind tunnel test results for the long-span latticed spatial roof structure of the Shenzhen Citizen Center. A direct comparison of model test results with full-scale measurements is always desirable, not only for validating the experimental data and methods but also for providing a better understanding of the physics, such as Reynolds number and scale effects. Since the number and locations of the full-scale measurement points differ from those of the wind tunnel pressure taps, the weighted proper orthogonal decomposition technique is applied to the wind pressure data obtained from the wind tunnel tests to generate a time history of the wind load vector; the loads acting on all internal nodes are then obtained by interpolation. The nodal mean wind pressure coefficients, root-mean-square wind pressure coefficients, and wind pressure power spectra are also calculated. The time- and frequency-domain characteristics of the full-scale measured wind load are analyzed based on filtered data acquisitions. In the analysis, special attention is paid to the distribution of the mean wind pressure coefficients over the central part of the roof. Furthermore, the difference between the wind pressure power spectra from the wind tunnel experiments and those from the full-scale in-situ measurements is briefly discussed. The results provide an important foundation for analyzing the wind-induced dynamic response of long-span spatial latticed structures.
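The proper orthogonal decomposition used above amounts to an SVD of the (mean-removed) pressure snapshot matrix. The sketch below is a minimal illustration on a synthetic tap layout; it omits the paper's weighting matrix and interpolation step, and all layout and amplitude choices are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record: 8 pressure taps, 2000 time steps, two spatial modes + noise.
n_taps, n_steps = 8, 2000
t = np.linspace(0.0, 20.0, n_steps)
mode1 = np.sin(np.linspace(0.0, np.pi, n_taps))
mode2 = np.cos(np.linspace(0.0, np.pi, n_taps))
p = (np.outer(mode1, np.sin(2.0 * np.pi * 1.0 * t))
     + 0.3 * np.outer(mode2, np.sin(2.0 * np.pi * 2.5 * t))
     + 0.05 * rng.standard_normal((n_taps, n_steps)))

# POD of the fluctuating part: columns of U are spatial modes, and
# s[:, None] * Vt holds the corresponding principal-coordinate time histories.
p_mean = p.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(p - p_mean, full_matrices=False)

# Truncate to the two strongest modes and reconstruct the load time history.
k = 2
p_rec = p_mean + U[:, :k] @ (s[:k, None] * Vt[:k, :])
energy_kept = (s[:k] ** 2).sum() / (s ** 2).sum()
```

A few modes typically capture nearly all of the fluctuating energy, which is what makes the technique useful for generating nodal load vectors.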

Digital Image Watermarking Scheme in the Singular Vector Domain (특이 벡터 영역에서 디지털 영상 워터마킹 방법)

  • Lee, Juck Sik
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.16 no.4
    • /
    • pp.122-128
    • /
    • 2015
  • As multimedia information spreads over cyber networks, problems such as the protection of legal rights and proof of ownership of information have recently arisen. Various image transforms, such as the DCT, DFT, and DWT, have been used to embed a watermark as a token of ownership. Recently, SVD, widely used in the field of numerical analysis, has additionally been applied to watermarking methods. This paper proposes a watermarking method that uses the Gabor cosine and sine transform together with SVD for embedding and extracting watermarks in digital images. After applying attacks such as noise addition, spatial transformation, filtering, and compression to the watermarked images, watermark extraction is performed using the proposed GCST-SVD method. Normalized correlation values are calculated to measure the similarity between the embedded and extracted watermarks as an index of watermarking performance, and the extracted watermark images are also inspected visually. Watermark images are inserted into the lowest vertical AC frequency band. The experimental results show that the proposed watermarking method using the singular vectors of the SVD yields large normalized correlation values of 0.9 or more and preserves the visual features of the embedded watermark under various attacks.
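The abstract does not give the embedding equations, so as a hedged stand-in here is the classic singular-value style of SVD watermark embedding with non-blind extraction, applied to a random block; the paper's GCST transform step and its singular-vector variant are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05                                  # embedding strength

cover = rng.random((64, 64))                  # stand-in transform-domain block
watermark = rng.integers(0, 2, size=(64, 64)).astype(float)

# Embed: perturb the cover's singular values with the watermark, then
# re-assemble with the cover's own singular vectors.
U, s, Vt = np.linalg.svd(cover)
Uw, sw, Vtw = np.linalg.svd(np.diag(s) + alpha * watermark)
marked = U @ np.diag(sw) @ Vt

# Extract (non-blind): the stored Uw, Vtw and original s act as the side key.
s_m = np.linalg.svd(marked, compute_uv=False)
recovered = (Uw @ np.diag(s_m) @ Vtw - np.diag(s)) / alpha

# Normalized correlation between the embedded and extracted watermarks.
nc = (watermark * recovered).sum() / np.sqrt(
    (watermark ** 2).sum() * (recovered ** 2).sum())
```

In the attack-free case the extraction is exact up to floating-point error, so the normalized correlation is essentially 1; attacks such as filtering or compression degrade it.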

Cut out effect on nonlinear post-buckling behavior of FG-CNTRC micro plate subjected to magnetic field via FSDT

  • Jamali, M.;Shojaee, T.;Mohammadi, B.;Kolahchi, R.
    • Advances in nano research
    • /
    • v.7 no.6
    • /
    • pp.405-417
    • /
    • 2019
  • This research is devoted to the post-buckling analysis of a functionally graded carbon-nanotube-reinforced composite (FG-CNTRC) micro plate with a cutout, subjected to a magnetic field and resting on an elastic medium. The plate formulation is based on first-order shear deformation theory (FSDT); the material properties of the FG-CNTRC are assumed to vary through the thickness according to the rule of mixture, and Eringen's nonlocal theory is applied to capture the size-dependent effect. The system is considered to be embedded in an elastic medium and subjected to a longitudinal magnetic field. An energy approach, domain decomposition, and the Rayleigh-Ritz method, in conjunction with the Newton-Raphson iterative technique, are employed to trace the post-buckling paths of the FG-CNTRC micro plate with a cutout. The influence of important parameters such as the small-scale effect, cutout dimension, type of FG distribution of the CNTs, CNT volume fraction, plate aspect ratio, magnitude of the magnetic field, elastic medium, and biaxial load on the post-buckling behavior of the system is calculated. The results indicate that the aspect ratio and the side length of the square cutout have a negative effect on the post-buckling response of the micro composite plate. Furthermore, the presence of CNTs improves the post-buckling behavior of the plate, and different CNT distributions yield different responses. The nonlocal parameter and the biaxial compression load also have a negative effect on the post-buckling response, while imposing a magnetic field increases the post-buckling load of the microstructure.
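The Rayleigh-Ritz reduction ends in nonlinear algebraic equilibrium equations that are solved by Newton-Raphson load stepping. The toy model below (a single-DOF pitchfork with a small imperfection, standing in for the paper's reduced plate equations, which are not given in the abstract) shows the path-tracing loop:

```python
# One-DOF post-buckling toy model with a small imperfection EPS:
#   R(u, lam) = (1 - lam) * u + u**3 - EPS
# For lam > 1 the post-buckled amplitude approaches sqrt(lam - 1).
EPS = 0.02

def newton(lam, u0, tol=1e-12, max_iter=50):
    """Newton-Raphson on the scalar equilibrium residual."""
    u = u0
    for _ in range(max_iter):
        r = (1.0 - lam) * u + u ** 3 - EPS    # residual force
        kt = (1.0 - lam) + 3.0 * u ** 2       # tangent stiffness
        du = -r / kt
        u += du
        if abs(du) < tol:
            break
    return u

# Trace the post-buckling path: step the load factor and use the previous
# converged state as the predictor for the next step.
path = []
u = 1.0
for i in range(1, 11):
    lam = 1.0 + 0.1 * i
    u = newton(lam, u)
    path.append((lam, u))
```

Re-using the converged state as the predictor keeps each Newton solve inside the basin of the stable post-buckled branch, which is the standard reason for incremental (rather than one-shot) load application.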

Structural health monitoring of a cable-stayed bridge using wireless smart sensor technology: data analyses

  • Cho, Soojin;Jo, Hongki;Jang, Shinae;Park, Jongwoong;Jung, Hyung-Jo;Yun, Chung-Bang;Spencer, Billie F. Jr.;Seo, Ju-Won
    • Smart Structures and Systems
    • /
    • v.6 no.5_6
    • /
    • pp.461-480
    • /
    • 2010
  • This paper analyzes the data collected from the 2nd Jindo Bridge, a cable-stayed bridge in Korea that serves as an international structural health monitoring (SHM) test bed for advanced wireless smart sensor network (WSSN) technology. The SHM system consists of 70 wireless smart sensor nodes deployed underneath the deck, on the pylons, and on the cables to capture the vibration of the bridge excited by traffic and environmental loading. The data are analyzed in both the time and frequency domains. Modal properties of the bridge are identified using the frequency domain decomposition and stochastic subspace identification methods based on the output-only measurements, and the results are compared with those obtained from a detailed finite element model. Tension forces for the 10 instrumented stay cables are also estimated from the ambient acceleration data and compared both with the initial design values and with those obtained during two previous regular inspections. The results of the data analyses demonstrate that the WSSN-based SHM system performs effectively for this cable-stayed bridge, giving direct access to its physical status.
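Frequency domain decomposition identifies modes from output-only data by taking an SVD of the spectral density matrix at each frequency line. A minimal two-sensor sketch follows (synthetic single-mode response and a raw periodogram in place of a properly windowed spectral estimate; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-channel ambient response dominated by one 3 Hz mode, sampled at 100 Hz.
fs, n = 100.0, 4096
t = np.arange(n) / fs
mode_shape = np.array([1.0, 0.6])          # amplitude ratio between the sensors
resp = np.outer(mode_shape, np.sin(2.0 * np.pi * 3.0 * t))
resp += 0.1 * rng.standard_normal(resp.shape)

# Cross power spectral density matrix at each frequency line.
X = np.fft.rfft(resp, axis=1)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
G = np.einsum('if,jf->fij', X, X.conj())   # G[f] = X(f) X(f)^H

# FDD: the first singular value of G peaks at a natural frequency, and the
# corresponding singular vector approximates the mode shape there.
svals = np.linalg.svd(G, compute_uv=False)
peak = int(np.argmax(svals[:, 0]))
f_identified = freqs[peak]
u, _, _ = np.linalg.svd(G[peak])
shape_ratio = abs(u[1, 0] / u[0, 0])
```

The identified frequency lands on the spectral line nearest 3 Hz, and the singular-vector component ratio recovers the assumed mode shape; cable tensions are then commonly back-calculated from such identified frequencies via taut-string formulas.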

Experimental validation of a multi-level damage localization technique with distributed computation

  • Yan, Guirong;Guo, Weijun;Dyke, Shirley J.;Hackmann, Gregory;Lu, Chenyang
    • Smart Structures and Systems
    • /
    • v.6 no.5_6
    • /
    • pp.561-578
    • /
    • 2010
  • This study proposes a multi-level damage localization strategy to achieve an effective damage detection system for civil infrastructure based on wireless sensors. The proposed system is designed for distributed computation in a wireless sensor network (WSN). Modal identification is achieved using the frequency-domain decomposition (FDD) method and the peak-picking technique. The ASH (angle-between-string-and-horizon) and AS (axial strain) flexibility-based methods are employed to identify and localize damage. Fundamentally, the multi-level strategy does not activate all of the sensor nodes in the network at once. Instead, relatively few sensors are used to perform coarse-grained damage localization; if damage is detected, only those sensors in the potentially damaged regions are incrementally added to the network to perform finer-grained localization. In this way, many nodes can remain asleep for part or all of the multi-level interrogations, and the total energy cost is reduced considerably. In addition, a novel distributed computing strategy is proposed to reduce the energy consumed in each sensor node: modal identification and damage detection tasks are distributed across the WSN, and only a small amount of useful intermediate results is transmitted wirelessly. Computations are first performed on each leaf node independently, and the aggregated information is transmitted to one cluster head in each cluster. A second stage of computation is performed on each cluster head, and the identified operational deflection shapes and natural frequencies are transmitted to the base station of the WSN, where the damage indicators are extracted. The proposed strategy yields a WSN-based SHM system that can effectively and automatically identify and localize damage while remaining energy-efficient. The strategy is validated using two illustrative numerical simulations, and experimental validation is performed using a cantilevered beam.
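The coarse-to-fine activation logic can be sketched in a few lines. Region and node names below are hypothetical, and a trivial placeholder stands in for the ASH/AS flexibility-based damage indicators:

```python
# Region -> all sensor nodes covering it (hypothetical layout).
regions = {
    "R1": ["n1", "n2", "n3", "n4"],
    "R2": ["n5", "n6", "n7", "n8"],
}
coarse = {"R1": ["n1"], "R2": ["n5"]}   # few nodes for coarse localization

def region_damaged(region, active_nodes):
    # Placeholder for the flexibility-based (ASH/AS) indicator computed from
    # the modes identified by the active nodes; here damage is assumed in R2.
    return region == "R2"

awake = []
for region, nodes in coarse.items():
    awake += nodes                       # level 1: coarse-grained check
    if region_damaged(region, nodes):
        # Level 2: wake the remaining nodes only in the flagged region.
        awake += [n for n in regions[region] if n not in nodes]
```

Only the flagged region is fully instrumented; the other region's extra nodes never wake, which is the source of the energy savings the abstract describes.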

Selective Shuffling for Hiding Hangul Messages in Steganography (스테가노그래피에서 한글 메시지 은닉을 위한 선택적 셔플링)

  • Ji, Seon-su
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.3
    • /
    • pp.211-216
    • /
    • 2022
  • Steganography protects the existence of hidden information by embedding a secret message at a specific location in a cover medium. Security and robustness are strengthened by applying hybrid methods that combine encryption and steganography. In particular, techniques that increase chaos and randomness are needed to improve security; shuffling methods based on the discrete cosine transform (DCT) and the least significant bit (LSB) remain an area that needs further study. I propose a new approach to hiding the bit information of Hangul messages that integrates a selective shuffling method, which adds complexity to the message hiding, with a spatial-domain steganography technique; inverse shuffling is applied when extracting messages. In this paper, the Hangul message to be inserted is decomposed into its choseong (initial), jungseong (medial), and jongseong (final) components, and a selective shuffling process based on this information improves security and chaos. The correlation coefficient and PSNR were used to confirm the performance of the proposed method, and the PSNR value of the proposed method was confirmed to be appropriate when compared with the reference value.
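The abstract does not specify the selective shuffling itself, so as a generic illustration here is a keyed shuffle-then-LSB-embed round trip, with `random.Random(key)` standing in for the key stream and made-up pixel values; the per-component (choseong/jungseong/jongseong) selection is not modeled:

```python
import random

def embed(cover, bits, key):
    """Shuffle the message bits with a keyed permutation, then write LSBs."""
    order = list(range(len(bits)))
    random.Random(key).shuffle(order)            # keyed permutation of indices
    stego = list(cover)
    for slot, idx in enumerate(order):
        stego[slot] = (stego[slot] & ~1) | bits[idx]   # overwrite pixel LSB
    return stego

def extract(stego, n_bits, key):
    """Read LSBs and apply the inverse shuffling to restore the bit order."""
    order = list(range(n_bits))
    random.Random(key).shuffle(order)
    bits = [0] * n_bits
    for slot, idx in enumerate(order):
        bits[idx] = stego[slot] & 1
    return bits

cover = [137, 200, 55, 14, 91, 162, 73, 240]     # stand-in pixel values
message = [1, 0, 1, 1, 0, 0, 1, 0]               # bits of the hidden message
recovered = extract(embed(cover, message, key=42), len(message), key=42)
```

Because each pixel changes by at most one LSB, the PSNR of the stego image stays high while the shuffling removes any positional correlation between message bits and embedding slots.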

A Method for Prediction of Quality Defects in Manufacturing Using Natural Language Processing and Machine Learning (자연어 처리 및 기계학습을 활용한 제조업 현장의 품질 불량 예측 방법론)

  • Roh, Jeong-Min;Kim, Yongsung
    • Journal of Platform Technology
    • /
    • v.9 no.3
    • /
    • pp.52-62
    • /
    • 2021
  • Quality control is critical at manufacturing sites, and predicting the risk of quality defects before manufacturing is key. However, the reliability of manual quality control methods is limited by human and physical constraints, because manufacturing processes vary across industries. These limitations become particularly obvious in domains with numerous manufacturing processes, such as the manufacture of major nuclear equipment. This study proposes a novel method for predicting the risk of quality defects using natural language processing and machine learning. Production data collected over six years at a factory that manufactures main equipment installed in nuclear power plants were used. In the text preprocessing stage, a mapping method was applied to the word dictionary so that domain knowledge could be appropriately reflected, and a hybrid algorithm combining n-grams, Term Frequency-Inverse Document Frequency, and Singular Value Decomposition was constructed for sentence vectorization. Next, in the experiment to classify risky processes leading to poor quality, k-fold cross-validation was applied to cases ranging from unigrams to cumulative trigrams. To obtain objective experimental results, Naive Bayes and Support Vector Machine classifiers were used, achieving a maximum accuracy of 0.7685 and an F1-score of 0.8641, which demonstrates the effectiveness of the proposed method. Its performance was also compared with the votes of field engineers, and the results revealed that the proposed method outperformed them. Thus, the method can be implemented for quality control at manufacturing sites.
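As a hedged sketch of the vectorization pipeline (a made-up three-sentence corpus; the paper's domain-dictionary mapping step is omitted), unigram+bigram TF-IDF followed by truncated SVD looks like:

```python
import math

import numpy as np

docs = ["weld seam porosity found", "weld seam passed inspection",
        "porosity defect in casting"]           # illustrative process records

def ngrams(tokens, n_max=2):
    """All contiguous n-grams for n = 1..n_max."""
    return [" ".join(tokens[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]

tokenized = [ngrams(d.split()) for d in docs]
vocab = sorted({g for toks in tokenized for g in toks})

# IDF from document frequency, then the TF-IDF document-term matrix.
idf = {g: math.log(len(docs) / sum(g in toks for toks in tokenized))
       for g in vocab}
tfidf = np.array([[toks.count(g) * idf[g] for g in vocab]
                  for toks in tokenized])

# Truncated SVD (LSA): keep the k strongest latent dimensions per document.
U, s, Vt = np.linalg.svd(tfidf, full_matrices=False)
k = 2
doc_vecs = U[:, :k] * s[:k]                     # k-dimensional document vectors
```

The low-dimensional `doc_vecs` would then feed a classifier such as Naive Bayes or an SVM under k-fold cross-validation, as the study describes.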

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification (one label from two classes), multi-class classification (one label from several classes), and multi-label classification (multiple labels from several classes). Multi-label classification in particular requires a training method different from binary and multi-class classification because each instance carries multiple labels; moreover, since the number of labels to predict grows with the number of labels and classes, prediction becomes harder and performance improvement more difficult. To overcome these limitations, research on label embedding is being actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels, and thus cannot create a latent label space that sufficiently retains the information of the original labels.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding; label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space, which can be traced to the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, gradients are preserved during backpropagation, and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, forming a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates the multi-label classification after restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1-score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods.
This suggests that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was assessed by comparing its performance across domain characteristics and across different numbers of dimensions of the latent label space.
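A structural sketch of such an encoder/decoder with skip connections might look as follows (forward pass only, illustrative layer sizes, random untrained weights, and no training loop; the paper's actual architecture is not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(3)

n_labels, n_latent = 64, 8                     # illustrative dimensions

def dense(n_in, n_out):
    """He-style random initialization for a weight matrix."""
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)

W_enc1, W_enc2 = dense(n_labels, n_labels), dense(n_labels, n_latent)
W_dec1, W_dec2 = dense(n_latent, n_latent), dense(n_latent, n_labels)

def block_with_skip(x, W):
    # y = relu(x W) + x : the identity path is the skip connection, so the
    # gradient can flow around the nonlinearity during backpropagation.
    return np.maximum(x @ W, 0.0) + x

def encode(y):
    return block_with_skip(y, W_enc1) @ W_enc2   # compress to latent space

def decode(z):
    return block_with_skip(z, W_dec1) @ W_dec2   # restore the label space

labels = (rng.random((5, n_labels)) < 0.1).astype(float)   # multi-hot labels
latent = encode(labels)
restored = decode(latent)
```

In the full methodology the classifier predicts `latent` from the abstract text, and `decode` maps that prediction back to the original keyword label space for evaluation.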