• Title/Summary/Keyword: Computational Techniques

Analysis on Lightweight Methods of On-Device AI Vision Model for Intelligent Edge Computing Devices (지능형 엣지 컴퓨팅 기기를 위한 온디바이스 AI 비전 모델의 경량화 방식 분석)

  • Hye-Hyeon Ju;Namhi Kang
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.1-8
    • /
    • 2024
  • On-device AI technology, which runs AI models on edge devices to support real-time processing and enhance privacy, is attracting attention. As intelligent IoT is applied to various industries, services utilizing on-device AI technology are increasing significantly. However, general deep learning models require substantial computational resources for inference and training. Therefore, various lightweighting methods such as quantization and pruning have been suggested for operating deep learning models on embedded edge devices. Among these methods, this paper focuses on pruning and analyzes how to lighten deep learning models and apply them to edge computing devices. In particular, we use dynamic and static pruning techniques to evaluate the inference speed, accuracy, and memory usage of a lightweight AI vision model. The analysis in this paper can be applied to intelligent video surveillance systems or video security systems in autonomous vehicles, where real-time processing is highly required. It is also expected to be useful across various IoT services and industries.
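
The abstract above contrasts dynamic and static pruning but includes no code. As a rough illustration of the static (one-shot, magnitude-based) variant, here is a minimal PyTorch sketch; the MobileNetV2 model and the 30% ratio are placeholders, not the paper's actual configuration.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights=None)  # placeholder vision model

# Statically remove the 30% smallest-magnitude weights from every conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name='weight', amount=0.3)
        prune.remove(module, 'weight')  # bake the sparsity into the weights

zeros = sum(int((p == 0).sum()) for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f'{zeros / total:.1%} of parameters are now zero')
```

Dynamic pruning, by contrast, decides what to skip at inference time based on the input; realizing actual latency gains from either variant on an edge device also requires a sparse-aware runtime.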

An Automatic Data Collection System for Human Pose using Edge Devices and Camera-Based Sensor Fusion (엣지 디바이스와 카메라 센서 퓨전을 활용한 사람 자세 데이터 자동 수집 시스템)

  • Young-Geun Kim;Seung-Hyeon Kim;Jung-Kon Kim;Won-Jung Kim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.189-196
    • /
    • 2024
  • Frequent false alarms from the Intelligent Selective Control System have raised significant concerns. These persistent issues have led to declines in operational efficiency and in market credibility among agents. Developing a new model or replacing the existing one to mitigate false alarms entails substantial opportunity costs; hence, improving the quality of the training dataset is the pragmatic option. However, smaller organizations face challenges because of inadequate capabilities in dataset collection and refinement. This paper proposes an automatic human-pose data collection system built around a human pose estimation model, utilizing camera-based sensor fusion techniques and edge devices. The system collects field data directly and processes it in real time at the network periphery, distributing the computational load that is typically centralized. In addition, by labeling field data directly, it aids in constructing new training datasets.
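
The edge-side pipeline described above (capture, estimate pose, label, append to the dataset) could be sketched as below. This is a hypothetical skeleton: estimate_pose() and fuse() are stand-ins for the paper's unspecified model and camera-based sensor-fusion step.

```python
import csv
import cv2  # OpenCV camera capture

def estimate_pose(frame):
    """Stand-in for the pose-estimation model (not specified in the abstract)."""
    return [(0.0, 0.0, 0.5)] * 17  # e.g., 17 keypoints as (x, y, confidence)

def fuse(keypoints, frame):
    """Stand-in for the camera-based sensor-fusion and labeling step."""
    return [v for kp in keypoints for v in kp]

cap = cv2.VideoCapture(0)
with open('pose_dataset.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    for _ in range(100):                  # collect a fixed number of samples
        ok, frame = cap.read()
        if not ok:
            break
        # Label directly on the edge device and append to the training set.
        writer.writerow(fuse(estimate_pose(frame), frame))
cap.release()
```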

Thermal post-buckling measurement of the advanced nanocomposites reinforced concrete systems via both mathematical modeling and machine learning algorithm

  • Minggui Zhou;Gongxing Yan;Danping Hu;Haitham A. Mahmoud
    • Advances in nano research
    • /
    • v.16 no.6
    • /
    • pp.623-638
    • /
    • 2024
  • This study investigates the thermal post-buckling behavior of concrete eccentric annular sector plates reinforced with graphene oxide powders (GOPs). Employing the minimum total potential energy principle, the plates' stability and response under thermal loads are analyzed. The Haber-Schaim foundation model is utilized to account for the support conditions, while the transform differential quadrature method (TDQM) is applied to solve the governing differential equations efficiently. The integration of GOPs significantly enhances the mechanical properties and stability of the plates, making them suitable for advanced engineering applications. Numerical results demonstrate the critical thermal loads and post-buckling paths, providing valuable insights into the design and optimization of such reinforced structures. This study also presents a machine learning algorithm designed to predict complex engineering phenomena using datasets derived from the presented mathematical modeling. By leveraging advanced data analytics and machine learning techniques, the algorithm effectively captures and learns intricate patterns from the mathematical models, providing accurate and efficient predictions. The methodology involves generating comprehensive datasets from mathematical simulations, which are then used to train the machine learning model. The trained model can predict various engineering outcomes, such as stress, strain, and thermal responses, with high precision. This approach significantly reduces the computational time and resources required for traditional simulations, enabling rapid and reliable analysis. It offers a robust framework for predicting the thermal post-buckling behavior of reinforced concrete plates, contributing to the development of resilient and efficient structural components in civil engineering.
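
The surrogate-modeling loop the abstract describes (generate datasets from the mathematical model, then train an ML predictor on them) can be sketched as follows. The closed-form thermal_postbuckling_load() below is an invented stand-in for the TDQM solution, and scikit-learn's gradient boosting is just one plausible regressor choice.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def thermal_postbuckling_load(gop_fraction, thickness_ratio):
    """Placeholder response surface; the real values would come from TDQM runs."""
    return 1.0 + 3.5 * gop_fraction - 0.8 * thickness_ratio ** 2

# Generate a training dataset from the mathematical model.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.1], [0.05, 1.0], size=(2000, 2))  # (GOP fraction, h ratio)
y = np.array([thermal_postbuckling_load(*row) for row in X])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = GradientBoostingRegressor().fit(X_tr, y_tr)
print('held-out R^2:', surrogate.score(X_te, y_te))  # cheap stand-in for simulation
```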

On the elastic stability and free vibration responses of functionally graded porous beams resting on Winkler-Pasternak foundations via finite element computation

  • Zakaria Belabed;Abdelouahed Tounsi;Mohammed A. Al-Osta;Abdeldjebbar Tounsi;Hoang-Le Minh
    • Geomechanics and Engineering
    • /
    • v.36 no.2
    • /
    • pp.183-204
    • /
    • 2024
  • In the current investigation, a novel beam finite element model is formulated to analyze the buckling and free vibration responses of functionally graded porous beams resting on Winkler-Pasternak elastic foundations. The novelty lies in a simplified finite element with only three degrees of freedom per node, integrating both C0 and C1 continuity requirements through Lagrange and Hermite interpolations, respectively, in isoparametric coordinates, while emphasizing the impact of z-coordinate-dependent porosity on vibration and buckling responses. The proposed model has been validated, demonstrating high accuracy when compared to previously published solutions. A detailed parametric examination is performed, highlighting the influence of porosity distribution, foundation parameters, slenderness ratio, and boundary conditions. Unlike existing numerical techniques, the proposed element achieves a high rate of convergence with reduced computational complexity. Additionally, the model's adaptability to various mechanical problems and structural geometries is showcased through the numerical evaluation of elastic foundations, with results in strong agreement with the theoretical formulation. In light of the findings, porosity significantly affects the mechanical integrity of FGP beams on elastic foundations, and the advanced beam element offers a stable, efficient model for future research. This in-depth investigation enriches porous-structure simulation in a field with limited existing research, motivating further exploration.
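
For readers unfamiliar with the C1 requirement mentioned above, the transverse deflection of a beam element is conventionally interpolated with Hermite cubic shape functions. The textbook form is sketched below; it is not a reproduction of the paper's specific three-DOF element.

```python
import numpy as np

def hermite_shape(xi, L):
    """Hermite cubic shape functions at natural coordinate xi in [0, 1],
    for a two-node beam element of length L (deflection + rotation DOFs)."""
    N1 = 1 - 3 * xi**2 + 2 * xi**3     # deflection at node 1
    N2 = L * (xi - 2 * xi**2 + xi**3)  # rotation at node 1
    N3 = 3 * xi**2 - 2 * xi**3         # deflection at node 2
    N4 = L * (-(xi**2) + xi**3)        # rotation at node 2
    return np.array([N1, N2, N3, N4])

# The translational functions form a partition of unity:
N = hermite_shape(0.4, 2.0)
print(N[0] + N[2])  # -> 1.0
```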

A Study on the Data Analysis of Fire Simulation in Underground Utility Tunnel for Digital Twin Application (디지털트윈 적용을 위한 지하공동구 화재 시뮬레이션의 데이터 분석 연구)

  • Jae-Ho Lee;Se-Hong Min
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.82-92
    • /
    • 2024
  • Purpose: The purpose of this study is to find a solution to the massive data volume that arises when fire-simulation data is linked to augmented reality, and to the resulting data-overload problem. Method: An experiment was conducted to set an appropriate interval between input data points, improving the reliability and reducing the computational complexity of linear interpolation, a data-estimation technique. In addition, a validity check was conducted to confirm whether linear interpolation adequately reflects the dynamic changes of a fire. Result: When applied to the underground utility tunnel studied here, inputting data at 10 m intervals gave highly satisfactory results for both the reliability of the interpolation and the processing speed of the simulation. Evaluation using MAE and R-squared further verified that estimating fire-simulation data via interpolation has high explanatory power and reliability. Conclusion: This study resolved the data-overload problem that arises when digital-twin technology is applied to fire simulation by using interpolation, and confirmed that the resulting fire-information prediction and visualization are of great help for real-time fire prevention.
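
As a minimal sketch of the estimation idea in this abstract, the snippet below linearly interpolates sparsely sampled simulation output and scores the estimate with MAE and R-squared. The 10 m spacing matches the abstract; the temperature profile itself is invented for illustration.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

def temperature(x):
    """Invented tunnel temperature profile standing in for simulation output."""
    return 20 + 500 * np.exp(-((x - 50) / 15) ** 2)

x_coarse = np.arange(0, 101, 10)   # simulation samples every 10 m
x_fine = np.arange(0, 101, 1)      # points the digital twin must display

t_est = np.interp(x_fine, x_coarse, temperature(x_coarse))  # linear interpolation
t_true = temperature(x_fine)

print('MAE :', mean_absolute_error(t_true, t_est))
print('R^2 :', r2_score(t_true, t_est))
```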

Reed-Solomon Encoded Block Storage in Key-value Store-based Blockchain Systems (키값 저장소 기반 블록체인 시스템에서 리드 솔로몬 부호화된 블록 저장)

  • Seong-Hyeon Lee;Jinchun Choi;Myungcheol Lee
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.3
    • /
    • pp.102-110
    • /
    • 2024
  • A blockchain records all transactions issued by users, which are then replicated, stored, and shared by participants of the blockchain network. Therefore, the capacity of the ledger stored by each participant continues to grow as the blockchain network operates. To address this issue, research is being conducted on methods that improve storage efficiency while ensuring that valid values remain recoverable from the ledger even in the presence of device failures or malicious participants. One research direction applies techniques such as Reed-Solomon encoding to the storage of blockchain ledgers. In this paper, we apply Reed-Solomon encoding to the key-value store used for ledger storage in an open-source blockchain, and measure the storage efficiency and the added computational overhead. Experimental results confirm that storage efficiency increased by 86%, while the increase in CPU operations required for encoding was only about 2.7%.
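
A minimal sketch of the encode/decode cycle, using the third-party reedsolo package (an assumption; the paper integrates the coding into an open-source blockchain's key-value store, which is not reproduced here):

```python
from reedsolo import RSCodec  # pip install reedsolo

rsc = RSCodec(10)                   # 10 parity bytes: corrects up to 5 byte errors
block = b'serialized ledger block'  # placeholder ledger value
encoded = rsc.encode(block)         # what would be written to the key-value store

damaged = bytearray(encoded)
damaged[3] ^= 0xFF                  # simulate corruption on one stored copy
decoded = rsc.decode(bytes(damaged))[0]  # recent reedsolo returns (msg, msg+ecc, errata)
assert decoded == block             # the original block is still recoverable
```

The trade-off measured in the paper is exactly this one: parity symbols cost extra CPU work at write time in exchange for storing far less than full replicas.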

Seismic Data Processing Using BERT-Based Pretraining: Comparison of Shotgather Arrays (BERT 기반 사전학습을 이용한 탄성파 자료처리: 송신원 모음 배열 비교)

  • Youngjae Shin
    • Geophysics and Geophysical Exploration
    • /
    • v.27 no.3
    • /
    • pp.171-180
    • /
    • 2024
  • The processing of seismic data involves analyzing seismic wave data to understand the internal structure and characteristics of the Earth, which requires high computational power. Recently, machine learning (ML) techniques have been introduced to address these challenges and have been utilized in various tasks such as noise reduction and velocity model construction. However, most studies have focused on specific seismic data processing tasks, limiting the full utilization of the similar features and structures inherent in the datasets. In this study, we compared the efficacy of using receiver-wise time-series data (a "receiver array") and synchronized receiver signals (a "time array") from shotgathers for pretraining a Bidirectional Encoder Representations from Transformers (BERT) model. To this end, shotgather data generated from a synthetic model containing faults were used to perform noise reduction, velocity prediction, and fault detection tasks. For random noise reduction, both the receiver and time arrays performed well. However, for tasks requiring the identification of spatial distributions, such as velocity estimation and fault detection, the time array gave superior results.
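
The two input orderings compared in the paper differ only in how a shotgather matrix is sliced into token sequences. The toy reshaping below illustrates the distinction; the array sizes are arbitrary and the BERT model itself is not reproduced.

```python
import numpy as np

n_receivers, n_samples = 64, 512
shotgather = np.random.randn(n_receivers, n_samples)  # synthetic placeholder

receiver_array = shotgather    # one sequence per receiver trace (time-series tokens)
time_array = shotgather.T      # one token per time step across all receivers

print(receiver_array.shape)    # (64, 512): 64 sequences of 512 time samples
print(time_array.shape)        # (512, 64): 512 snapshots of 64 receiver amplitudes
```

The abstract's finding that the time array wins on velocity estimation and fault detection is consistent with this view: each time-array token already encodes a spatial snapshot across receivers.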

Optimization Strategies for Federated Learning Using WASM on Device and Edge Cloud (WASM을 활용한 디바이스 및 엣지 클라우드 기반 Federated Learning의 최적화 방안)

  • Jong-Seok Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.213-220
    • /
    • 2024
  • This paper proposes an optimization strategy for performing Federated Learning between devices and edge clouds using WebAssembly (WASM). The proposed strategy aims to maximize efficiency by conducting partial training on devices and the remaining training on edge clouds. Specifically, it mathematically describes and evaluates methods to optimize data transfer between GPU memory segments and the overlapping of computational tasks to reduce overall training time and improve GPU utilization. Through various experimental scenarios, we confirmed that asynchronous data transfer and task overlap significantly reduce training time, enhance GPU utilization, and improve model accuracy. In scenarios where all optimization techniques were applied, training time was reduced by 47%, GPU utilization improved to 91.2%, and model accuracy increased to 89.5%. These results demonstrate that asynchronous data transfer and task overlap effectively reduce GPU idle time and alleviate bottlenecks. This study is expected to contribute to the performance optimization of Federated Learning systems in the future.
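
The paper's WASM runtime is not shown in the abstract, but the core overlap idea (copy the next batch while the current batch computes) can be sketched with PyTorch CUDA streams; this substitutes PyTorch for the paper's actual device/edge stack and uses a placeholder loss.

```python
import torch

device = torch.device('cuda')
copy_stream = torch.cuda.Stream()

def train_epoch(model, optimizer, batches):
    """batches: iterable of pinned-memory CPU tensors (placeholder pipeline)."""
    current = None
    for cpu_batch in batches:
        with torch.cuda.stream(copy_stream):      # asynchronous host-to-GPU copy
            upcoming = cpu_batch.to(device, non_blocking=True)
        if current is not None:
            loss = model(current).mean()          # placeholder loss function
            optimizer.zero_grad()
            loss.backward()                       # compute overlaps the copy above
            optimizer.step()
        torch.cuda.current_stream().wait_stream(copy_stream)  # order before reuse
        current = upcoming
    if current is not None:                       # drain the last prefetched batch
        loss = model(current).mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```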

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts its level of clustering to each customer's purchasing index. In the e-business era, many companies gather customers' demographic and transactional information, such as age, gender, purchasing date, and product category, and use it to predict customer preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services per customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. This keeps the number of predictive models manageable and supplies additional data, borrowed from similar customers, for customers who do not have enough of their own to build a good predictive model. However, the method often fails to provide highly personalized services to each individual, which matters especially for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with those who have little, which increases computational cost unnecessarily without significant performance improvement. The other conventional approach, the 1-to-1 method, provides more customized services than Customer-Segmentation because each predictive model is built using only the individual customer's data. It not only provides highly personalized services but also yields relatively simple, less costly models. Its limitation is that it does not produce a good predictive model when a customer has only a small amount of data; with insufficient transactional data, its performance deteriorates. To overcome the limitations of these two conventional methods, we suggest a new method, called Intelligent Customer Segmentation, that provides adaptively personalized services according to the customer's purchasing index. The suggested method clusters customers by purchasing index, so that predictions for customers who purchase less are based on data from more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all. The main idea is to apply clustering only when the number of transactions of the target customer is below a predefined criterion data size. To find this criterion, we suggest an algorithm called sliding-window correlation analysis, which identifies the transactional data size below which the performance of the 1-to-1 method drops sharply due to data sparsity. After finding this criterion, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply clustering to those who have less, until at least the criterion amount of data is available for model building. We apply the two conventional methods and the newly suggested method to Nielsen's beverage-purchasing data to predict each customer's purchasing amounts and purchasing categories.
We use two data-mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE) to predict the two dependent variables mentioned above. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and achieves the same level of performance as the Customer-Segmentation method at much lower computational cost.
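
A hedged sketch of the paper's switching rule follows: customers with at least `criterion` transactions receive individual (1-to-1) models, while the rest are pooled by clustering. The feature construction and the sliding-window correlation analysis that selects `criterion` are not reproduced; the model choice mirrors the abstract's Linear Regression.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_models(customers, criterion=30, n_clusters=5):
    """customers: dict of id -> (X, y) transaction features and purchase amounts."""
    models, pooled = {}, []
    for cid, (X, y) in customers.items():
        if len(y) >= criterion:
            models[cid] = LinearRegression().fit(X, y)   # 1-to-1 model
        else:
            pooled.append(cid)

    if pooled:  # segment only the data-poor customers
        profiles = np.array([customers[c][0].mean(axis=0) for c in pooled])
        labels = KMeans(n_clusters=min(n_clusters, len(pooled)),
                        n_init=10).fit_predict(profiles)
        for k in set(labels):
            ids = [c for c, lab in zip(pooled, labels) if lab == k]
            X = np.vstack([customers[c][0] for c in ids])
            y = np.concatenate([customers[c][1] for c in ids])
            group_model = LinearRegression().fit(X, y)   # shared segment model
            for c in ids:
                models[c] = group_model
    return models
```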

Performance Prediction for an Adaptive Optics System Using Two Analysis Methods: Statistical Analysis and Computational Simulation (통계분석 및 전산모사 기법을 이용한 적응광학 시스템 성능 예측)

  • Han, Seok Gi;Joo, Ji Yong;Lee, Jun Ho;Park, Sang Yeong;Kim, Young Soo;Jung, Yong Suk;Jung, Do Hwan;Huh, Joon;Lee, Kihun
    • Korean Journal of Optics and Photonics
    • /
    • v.33 no.4
    • /
    • pp.167-176
    • /
    • 2022
  • Adaptive optics (AO) systems compensate for atmospheric disturbance, especially phase distortion, by introducing a counter-wavefront deformation calculated from real-time wavefront sensing or prediction. Because AO system implementations are time-consuming and costly, it is highly desirable to estimate the system's performance during the development of the AO system or its parts. Among several techniques, we mostly apply statistical analysis, computational simulation, and optical-bench tests. Statistical analysis estimates performance from the sum of the performance variances due to all design parameters, but ignores any correlation between them. Computational simulation models every part of an adaptive optics system, including atmospheric disturbance and the closed loop between wavefront sensor and deformable mirror, as close to reality as possible, but some differences between simulation models and reality remain. The optical-bench test implements an almost identical AO system on an optical bench, to confirm the predictions of the previous methods. We are currently developing an AO system for a 1.6-m ground telescope using a deformable mirror recently developed in South Korea. This paper reports the results of the statistical analysis and computational simulation for the system's design and confirmation. For the analysis, we apply the Strehl ratio as the performance criterion, under the median seeing conditions at the Bohyun observatory in Korea. The statistical analysis predicts a Strehl ratio of 0.31. The simulation method reports a similar, slightly larger value of 0.32. During the study, the simulation method exhibited run-to-run variation due to the random nature of atmospheric disturbance; its results converge when the simulation time exceeds 0.9 seconds, i.e., approximately 240 times the critical time constant of the applied atmospheric disturbance.
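
The statistical method described above is a standard error budget: independent RMS wavefront-error contributions add in quadrature, and the extended Marechal approximation, S ≈ exp(-(2πσ/λ)²), converts the total into a Strehl ratio. The worked sketch below uses invented error terms, not the paper's actual budget.

```python
import numpy as np

wavelength_nm = 1550.0            # assumed observing wavelength (placeholder)
rms_errors_nm = {                 # hypothetical per-term RMS wavefront errors
    'fitting': 180.0,
    'temporal lag': 160.0,
    'measurement noise': 120.0,
    'calibration': 80.0,
}

sigma = np.sqrt(sum(e ** 2 for e in rms_errors_nm.values()))   # quadrature sum
strehl = np.exp(-(2 * np.pi * sigma / wavelength_nm) ** 2)     # Marechal estimate
print(f'total RMS = {sigma:.0f} nm -> Strehl ~ {strehl:.2f}')
```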