• Title/Summary/Keyword: combined systems


Monitoring moisture content of timber structures using PZT-enabled sensing and machine learning

  • Chen, Lin;Xiong, Haibei;He, Yufeng;Li, Xiuquan;Kong, Qingzhao
    • Smart Structures and Systems, v.29 no.4, pp.589-598, 2022
  • Timber structures are susceptible to structural damage caused by variations in moisture content (MC), inducing severe durability deterioration and safety issues. It is therefore of great significance to detect MC levels in timber structures. Compared to current methods for timber MC detection, which are time-consuming and require bulky equipment, the Lead Zirconate Titanate (PZT)-enabled stress wave sensing combined with statistical machine learning classification proposed in this paper offers portable devices and ease of operation. First, stress wave signals from different MC cases are excited and received by PZT sensors through active sensing. Subsequently, two non-baseline features are extracted from these stress wave signals. Finally, these features are fed to a statistical machine learning classifier (i.e., naïve Bayesian classification) to achieve MC detection of timber structures. Numerical simulations validate the feasibility of PZT-enabled sensing for perceiving MC variations. Tests covering five MC cases are conducted to verify the effectiveness of the proposed method. Results show high accuracy for timber MC detection, indicating great potential for rapid and long-term monitoring of the MC level of timber structures in future field applications.
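The classification step can be sketched with a small Gaussian naïve Bayes classifier; the two-dimensional feature vectors and five MC classes below are synthetic stand-ins for the paper's stress-wave features, not its data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two non-baseline stress-wave features,
# grouped into five hypothetical moisture-content (MC) classes.
n_classes, n_per = 5, 40
X = np.vstack([rng.normal(loc=[c, 2 * c], scale=0.3, size=(n_per, 2))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

def fit_gnb(X, y):
    """Per-class feature means/variances and priors (Gaussian naive Bayes)."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    prior = np.array([(y == c).mean() for c in classes])
    return mu, var, prior

def predict_gnb(X, mu, var, prior):
    # log p(c|x) ∝ log p(c) + Σ_j log N(x_j; mu_cj, var_cj)
    log_lik = -0.5 * (np.log(2 * np.pi * var)[None] +
                      (X[:, None, :] - mu[None]) ** 2 / var[None]).sum(axis=2)
    return np.argmax(np.log(prior)[None] + log_lik, axis=1)

mu, var, prior = fit_gnb(X, y)
acc = (predict_gnb(X, mu, var, prior) == y).mean()
```

On well-separated synthetic classes like these, the classifier recovers the MC label almost perfectly; the paper's reported accuracy, of course, refers to its own measured stress-wave features.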

Developing efficient model updating approaches for different structural complexity - an ensemble learning and uncertainty quantifications

  • Lin, Guangwei;Zhang, Yi;Liao, Qinzhuo
    • Smart Structures and Systems, v.29 no.2, pp.321-336, 2022
  • Model uncertainty is a key factor that can influence the accuracy and reliability of numerical model-based analysis. It is necessary to find an appropriate updating approach that can identify realistic model parameter values from measurements. In this paper, Bayesian model updating theory combined with the transitional Markov chain Monte Carlo (TMCMC) method and K-means cluster analysis is utilized to update the structural model parameters. Kriging and polynomial chaos expansion (PCE) are employed to generate surrogate models that reduce the computational burden in TMCMC. The selected updating approaches are applied to three structural examples of different complexity: a two-storey frame, a ten-storey frame, and the national stadium model. These represent a low-dimensional linear model, a high-dimensional linear model, and a nonlinear model, respectively. The performance of updating in these three models is assessed in terms of prediction uncertainty, numerical effort, and prior information. This study also investigates updating scenarios using the analytical approach and the surrogate models. Uncertainty quantification in the Bayesian approach is further discussed to verify the validity and accuracy of the surrogate models. Finally, the advantages and limitations of the surrogate model-based updating approaches are discussed for different levels of structural complexity. The possibility of utilizing the boosting algorithm as an ensemble learning method for improving the surrogate models is also presented.
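As a minimal sketch of the Bayesian updating idea, using plain random-walk Metropolis rather than the paper's TMCMC, and a hypothetical one-parameter spring-mass model with a synthetic noisy frequency measurement:

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward model: natural frequency of a 1-DOF oscillator, f = sqrt(k/m)/(2π).
m = 100.0  # assumed mass (kg)

def freq(k):
    return np.sqrt(k / m) / (2 * np.pi)

k_true, sigma = 4.0e4, 0.05
f_obs = freq(k_true) + rng.normal(0, sigma)  # synthetic noisy "measurement"

def log_post(k):
    # Uniform prior on an assumed plausible stiffness range.
    if not (1e4 < k < 1e5):
        return -np.inf
    return -0.5 * ((f_obs - freq(k)) / sigma) ** 2

# Random-walk Metropolis over the stiffness parameter.
k, lp, chain = 2.0e4, log_post(2.0e4), []
for _ in range(5000):
    k_prop = k + rng.normal(0, 1e3)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)

k_est = np.mean(chain[1000:])  # posterior mean after burn-in
```

TMCMC replaces this single chain with a sequence of tempered intermediate distributions, and the paper's surrogate models would replace the `freq` call when the forward model is expensive.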

Structural live load surveys by deep learning

  • Li, Yang;Chen, Jun
    • Smart Structures and Systems, v.30 no.2, pp.145-157, 2022
  • The design of safe and economical structures depends on reliable live load values obtained from load surveys. Live load surveys are traditionally conducted by randomly selecting rooms and weighing each item on-site, a method that suffers from low efficiency, high cost, and long cycle times. This paper proposes a deep learning-based method, combined with Internet big data, to perform live load surveys. The proposed method utilizes multi-source heterogeneous data, such as images, voice, and product identification, to obtain the live load without weighing each item, through object detection, web crawling, and speech recognition. Indoor object and face detection models are first developed by fine-tuning the YOLOv3 algorithm, to detect target objects and to obtain the number of people in a room, respectively. Each detection model is evaluated on an independent testing set. Then, web crawler frameworks with keyword and image retrieval are established to extract the weight information of detected objects from Internet big data. The live load in a room is derived by combining the weights and the numbers of items and people. To verify the feasibility of the proposed method, a live load survey is carried out for a meeting room. The results show that, compared with the traditional method of sampling and weighing, the proposed method can perform efficient and convenient live load surveys and represents a new load research paradigm.
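The final aggregation step, combining detected counts with crawled unit weights, can be sketched as follows; the item names, unit weights, and room area are all hypothetical stand-ins for the detector output and crawler results:

```python
# Hypothetical detector output: object and person counts for one meeting room.
detected = {"table": 2, "chair": 12, "projector": 1, "person": 10}

# Hypothetical unit weights (kg) as would be retrieved by the web crawler;
# an assumed average body weight stands in for each detected occupant.
unit_weight_kg = {"table": 35.0, "chair": 6.5, "projector": 3.0, "person": 65.0}

room_area_m2 = 30.0  # assumed room size

# Live load = total weight of items and people, normalized by floor area.
total_kg = sum(n * unit_weight_kg[item] for item, n in detected.items())
live_load_kn_per_m2 = total_kg * 9.81 / 1000 / room_area_m2
```

This is only the bookkeeping at the end of the pipeline; the paper's contribution is producing `detected` and `unit_weight_kg` automatically via YOLOv3, speech recognition, and web crawling instead of on-site weighing.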

Filter Contribution Recycle: Boosting Model Pruning with Small Norm Filters

  • Chen, Zehong;Xie, Zhonghua;Wang, Zhen;Xu, Tao;Zhang, Zhengrui
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.11, pp.3507-3522, 2022
  • Model pruning methods have attracted considerable attention recently owing to the increasing demand for deploying models on low-resource devices. Most existing methods use the weight norm of filters to represent their importance and directly discard the ones with small values to achieve the pruning target, which ignores the contribution of the small norm filters. This not only wastes filter contribution but also gives performance comparable to training from randomly initialized weights [1]. In this paper, we point out that the small norm filters can greatly harm the performance of the pruned model if they are discarded directly. Therefore, we propose a novel filter contribution recycle (FCR) method for structured model pruning to resolve the aforementioned problem. FCR collects and reassembles contribution from the small norm filters to obtain a mixed contribution collector, and then assigns the reassembled contribution to other filters with a higher probability of being preserved. To achieve the target FLOPs, FCR also adopts a weight decay strategy for the small norm filters. To explore the effectiveness of our approach, extensive experiments are conducted on the ImageNet2012 and CIFAR-10 datasets, and superior results are reported when comparing with other methods under the same or even greater FLOPs reduction. In addition, our method can flexibly be combined with other pruning criteria.
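The recycle idea can be sketched in numpy; the median threshold, the 0.1 mixing factor, and the layer shape below are illustrative assumptions, not the paper's exact FCR procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
filters = rng.normal(size=(8, 3, 3, 3))  # one conv layer: 8 filters of 3x3x3

# Norm-based importance: L1 norm of each filter.
norms = np.abs(filters).reshape(len(filters), -1).sum(axis=1)
keep = norms >= np.median(norms)  # assumed pruning criterion: keep top half

# Recycle: pool the small-norm filters' weights instead of discarding them,
# then mix the pooled contribution into the kept filters before pruning.
recycled = filters[~keep].sum(axis=0)
pruned = filters[keep] + 0.1 * recycled  # 0.1 is an assumed mixing factor
```

Discarding `filters[~keep]` outright would correspond to the baseline norm-based pruning the abstract criticizes; here their summed contribution survives inside the preserved filters.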

Decision Support System for Mongolian Portfolio Selection

  • Bukhsuren, Enkhtuul;Sambuu, Uyanga;Namsrai, Oyun-Erdene;Namsrai, Batnasan;Ryu, Keun Ho
    • Journal of Information Processing Systems, v.18 no.5, pp.637-649, 2022
  • Investors aim to increase their profitability by investing in the stock market. A sound strategy for minimizing the associated risk is portfolio diversification. In this paper, we propose a six-step stock portfolio selection model. This model is based on data mining clustering techniques that reflect the impact of the political, economic, legal, and corporate-governance environment in Mongolia. As a dataset, we selected the stock exchange trading prices, financial statements, and operational reports of the top-20 most highly capitalized stocks traded on the Mongolian Stock Exchange from 2013 to 2017. To cluster the stock returns and risks, we used k-means clustering, and we combined it with Markowitz's portfolio theory to create an optimal and efficient portfolio. We constructed an efficient frontier, creating 15 portfolios, and computed the weight of the stocks in each portfolio. From these portfolio options, the investor can choose any one option.
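The k-means-plus-Markowitz combination can be sketched on synthetic returns; the closed-form global minimum-variance weights below stand in for the full 15-portfolio efficient frontier, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.001, 0.02, size=(250, 6))  # 250 days x 6 synthetic stocks

mean, risk = returns.mean(axis=0), returns.std(axis=0)
pts = np.column_stack([mean, risk])

# Tiny k-means (k=2) on the (return, risk) plane to group similar stocks.
centers = pts[:2].copy()
for _ in range(20):
    labels = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
    centers = np.array([pts[labels == k].mean(axis=0) if (labels == k).any()
                        else centers[k] for k in range(2)])

# Markowitz global minimum-variance weights for the larger cluster's stocks:
# w = Σ⁻¹1 / (1ᵀΣ⁻¹1)
k_big = np.bincount(labels, minlength=2).argmax()
idx = np.flatnonzero(labels == k_big)
cov = np.cov(returns[:, idx], rowvar=False)
ones = np.ones(len(idx))
w = np.linalg.solve(cov, ones)
w /= ones @ w  # weights sum to 1
```

Sweeping a target-return constraint instead of taking the minimum-variance point would trace out the efficient frontier the paper constructs.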

Trends in Hardware Acceleration Techniques for Fully Homomorphic Encryption Operations (완전동형암호 연산 가속 하드웨어 기술 동향)

  • Park, S.C.;Kim, H.W.;Oh, Y.R.;Na, J.C.
    • Electronics and Telecommunications Trends, v.36 no.6, pp.1-12, 2021
  • As the demand for big data and big data-based artificial intelligence (AI) technology increases, so does the need to preserve the privacy of sensitive information contained in big data and to build high-speed encryption-based AI computation systems. Fully homomorphic encryption (FHE) is a representative encryption technology that preserves the privacy of sensitive data. FHE technology is being actively investigated primarily because, with FHE, decryption of the encrypted data is not required anywhere in the data flow: data can be stored, transmitted, combined, and processed in an encrypted state. Moreover, FHE is based on hard lattice problems that, because of their high computational complexity, are believed to resist even quantum computers. FHE thus offers a high security level and is receiving considerable attention as a next-generation encryption technology. However, despite being able to process computations on encrypted data, the slow computation speed caused by the high computational complexity of FHE is an obstacle to practical use. To address this problem, hardware technology that accelerates FHE operations is receiving extensive research attention. This article examines research trends in hardware technology for accelerating the operations of representative FHE schemes. In addition, the detailed structures of hardware accelerators for FHE operations are described.

Community Detection using Closeness Similarity based on Common Neighbor Node Clustering Entropy

  • Jiang, Wanchang;Zhang, Xiaoxi;Zhu, Weihua
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.8, pp.2587-2605, 2022
  • To efficiently detect community structure in complex networks, community detection algorithms can be designed from the perspective of node similarity. However, appropriate parameters must be chosen to achieve community division, and existing algorithms based on the similarity of common neighbors discriminate poorly between node pairs. To solve these problems, a novel community detection algorithm using closeness similarity based on common neighbor node clustering entropy, abbreviated CSCDA, is proposed. Firstly, to improve detection accuracy, common neighbors and the clustering coefficient are combined in the form of entropy, and a new closeness similarity measure is proposed. Through the designed similarity measure, the set of closely similar nodes of each node can be identified more accurately. Secondly, to reduce the randomness of the community detection result, the node leadership over the closely similar node set is used to determine the most closely similar first-order neighbor node to merge with when creating the initial communities. Thirdly, to address the difficult problem of parameter selection in existing algorithms, two levels of merging are used to iteratively detect the final communities following the idea of modularity optimization. Finally, experiments show that the normalized mutual information values are increased by an average of 8.06% and 5.94% on two scales of synthetic networks and real-world networks with known communities, and modularity is increased by an average of 0.80% on real-world networks without known communities.
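The abstract does not give the exact CSCDA formula, so the entropy-weighted common-neighbor score below is only an illustrative stand-in for a closeness similarity built from common neighbors and clustering coefficients:

```python
import math

# Small undirected toy graph as adjacency sets.
G = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4},
     4: {1, 2, 3, 5}, 5: {4, 6}, 6: {5}}

def clustering(z):
    """Local clustering coefficient of node z."""
    nbrs, k = G[z], len(G[z])
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in G[u])
    return 2 * links / (k * (k - 1))

def closeness_sim(u, v):
    """Illustrative similarity: each common neighbor contributes a base score
    plus a binary-entropy term of its clustering coefficient."""
    score = 0.0
    for z in G[u] & G[v]:
        c = clustering(z)
        h = -sum(p * math.log(p) for p in (c, 1 - c) if p > 0)
        score += 1 + h
    return score
```

Nodes 1 and 2, which share two common neighbors inside a dense cluster, score far higher than the bridge pair 4 and 5, which share none; this is the kind of discrimination between node pairs the proposed measure is designed to sharpen.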

Use of unmanned aerial systems for communication and air mobility in Arctic region

  • Chechin, Gennady V.;Kolesnichenko, Valentin E.;Selin, Anton I.
    • Advances in aircraft and spacecraft science, v.9 no.6, pp.525-536, 2022
  • The current state of the telecommunications infrastructure in the Arctic does not allow a wide range of required services to be provided for people, businesses, and other categories of users, which necessitates non-traditional approaches to its organization. The paper proposes an innovative approach to building a combined communication network based on a tethered high-altitude platform station (HAPS) located at an altitude of 1-7 km and connected via radio channels to terrestrial and satellite communication networks. The network configuration and the composition of the telecommunication equipment placed on the HAPS and in the terrestrial and satellite segments of the network are justified. The availability of modern equipment and the distributed structure of such an integrated network will allow, unlike existing networks (Iridium, Gonets, etc.), the organization of personal mobile communications, data transmission, and broadband Internet access at up to 100 Mbps for mobile and fixed subscribers, as well as rapid transmission of information from Internet of Things (IoT) sensors and unmanned aerial vehicles (UAVs). A substantiation of the possibility of achieving high network capacity in various paths is presented: inter-platform radio links, subscriber radio links, HAPS feeder links to the terrestrial network gateway, HAPS radio links to a satellite repeater (SR), etc. The economic efficiency of the proposed solution is assessed.

A Performance Comparison of Parallel Programming Models on Edge Devices (엣지 디바이스에서의 병렬 프로그래밍 모델 성능 비교 연구)

  • Dukyun Nam
    • IEMEK Journal of Embedded Systems and Applications, v.18 no.4, pp.165-172, 2023
  • Heterogeneous computing is a technology that utilizes different types of processors to perform parallel processing. It maximizes task processing and energy efficiency by leveraging various computing resources such as CPUs, GPUs, and FPGAs. Edge computing, on the other hand, has developed alongside IoT and 5G technologies. It is a form of distributed computing that utilizes computing resources close to clients, thereby offloading the central server, and it has evolved into intelligent edge computing combined with artificial intelligence. Intelligent edge computing enables comprehensive data processing, such as context awareness, prediction, control, and simple processing, for the data collected at the edge. If heterogeneous computing can be successfully applied at the edge, it is expected to maximize job processing efficiency while minimizing dependence on the central server. In this paper, experiments were conducted to verify the feasibility of various parallel programming models on high-end and low-end edge devices using benchmark applications. We analyzed the performance of five parallel programming models on the Raspberry Pi 4 and the Jetson Orin Nano as the low-end and high-end devices, respectively. In the experiments, OpenACC showed the best performance on the low-end edge device and OpenSYCL on the high-end device, owing to the stability and optimization of their system libraries.

A Machine Learning-Driven Approach for Wildfire Detection Using Hybrid-Sentinel Data: A Case Study of the 2022 Uljin Wildfire, South Korea

  • Linh Nguyen Van;Min Ho Yeon;Jin Hyeong Lee;Gi Ha Lee
    • Proceedings of the Korea Water Resources Association Conference, 2023.05a, pp.175-175, 2023
  • Detection and monitoring of wildfires are essential for limiting their harmful effects on ecosystems, human lives, and property. In this research, we propose a novel method, running on the Google Earth Engine platform, for identifying and characterizing burnt regions using a hybrid of Sentinel-1 (C-band synthetic aperture radar) and Sentinel-2 (multispectral imagery) data. The 2022 Uljin wildfire, the most severe such event in South Korean history, is the primary subject of our investigation. Given its documented success in remote sensing and land cover classification applications, we select the Random Forest (RF) method as our primary classifier. We then evaluate the performance of our model using multiple accuracy measures, including overall accuracy (OA), the Kappa coefficient, and the area under the curve (AUC). The proposed method demonstrates accurate and resilient wildfire identification compared to traditional methods that depend on survey data. These results have significant implications for the development of efficient and dependable wildfire monitoring systems and add to our knowledge of how machine learning and remote sensing-based approaches may be combined to improve environmental monitoring and management applications.
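The RF classification step can be sketched with scikit-learn; the two per-pixel features below are synthetic stand-ins for Sentinel-1 backscatter change and a Sentinel-2 spectral index (e.g. dNBR), not the study's actual imagery:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Synthetic per-pixel features: column 0 stands in for a spectral burn index,
# column 1 for SAR backscatter change; labels: 1 = burnt, 0 = unburnt.
n = 1000
burnt = rng.normal([0.45, -2.0], 0.15, size=(n, 2))
unburnt = rng.normal([0.05, 0.5], 0.15, size=(n, 2))
X = np.vstack([burnt, unburnt])
y = np.array([1] * n + [0] * n)

# Random Forest classifier, evaluated here with AUC as one of the measures
# the abstract lists (OA, Kappa, AUC).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
```

In the actual workflow this classification would run per pixel inside Google Earth Engine over the Uljin study area, with features derived from the co-registered Sentinel-1/Sentinel-2 stack.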
