• Title/Summary/Keyword: data-based model

Search Results: 21,105

An Inference Similarity-based Federated Learning Framework for Enhancing Collaborative Perception in Autonomous Driving

  • Zilong Jin;Chi Zhang;Lejun Zhang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.5
    • /
    • pp.1223-1237
    • /
    • 2024
  • Autonomous vehicles use onboard sensors to sense the surrounding environment. In complex autonomous driving scenarios, detection and recognition capabilities are constrained, which may result in serious accidents. An efficient way to enhance these capabilities is to establish collaborations with neighboring vehicles. However, such collaborations introduce additional challenges in terms of data heterogeneity, communication cost, and data privacy. In this paper, a novel personalized federated learning framework is proposed to address these challenges and enable efficient collaboration in autonomous driving environments. To obtain a global model, vehicles perform local training and transmit logits to a central unit instead of the entire model; thus the communication cost is minimized and data privacy is protected. Then, an inference similarity is derived to capture the characteristics of the data heterogeneity. The vehicles are divided into clusters based on the inference similarity, and a weighted aggregation is performed within each cluster. Finally, each vehicle downloads the corresponding aggregated global model and trains a personalized model tailored to the cluster with a similar data distribution, so that accuracy is not degraded by heterogeneous data. Experimental results demonstrate significant advantages of the proposed method in improving the efficiency of collaborative perception and reducing communication cost.
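The cluster-then-aggregate step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the cosine similarity measure, the greedy threshold clustering, and the uniform default weights are all assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened logit matrices."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_by_inference_similarity(logits, threshold=0.9):
    """Greedy clustering: a vehicle joins the first cluster whose
    representative logits are sufficiently similar, else starts a new one."""
    clusters = []  # each cluster is a list of vehicle indices
    reps = []      # representative logits per cluster
    for i, lg in enumerate(logits):
        for c, rep in enumerate(reps):
            if cosine_similarity(lg, rep) >= threshold:
                clusters[c].append(i)
                break
        else:
            clusters.append([i])
            reps.append(lg)
    return clusters

def aggregate(logits, cluster, weights=None):
    """Weighted aggregation of the logits within one cluster."""
    members = [logits[i] for i in cluster]
    w = np.ones(len(members)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, members))
```

Vehicles whose models produce similar logits on shared samples end up in the same cluster, so the aggregated model each vehicle downloads reflects a data distribution close to its own.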

A Method for Generating a Plant Model Based on Log Data for Control Level Simulation (제어시뮬레이션을 위한 생산시스템 로그데이터 기반 플랜트 모델 생성 방법)

  • Ko, Minsuk;Cheon, Sang Uk;Park, Sang Chul
    • Korean Journal of Computational Design and Engineering
    • /
    • v.18 no.1
    • /
    • pp.21-27
    • /
    • 2013
  • Presented in this paper is a log data based modeling method for the effective construction of a virtual plant model that can be used for virtual PLC (Programmable Logic Controller) simulation. For the PLC simulation, the corresponding virtual plant, consisting of virtual devices, is required to interact with the input and output symbols of a PLC. In other words, the behavior of a virtual device should be the same as that of the real device. Conventionally, the DEVS (Discrete Event Systems Specifications) formalism has been used to represent the behavior of a virtual device. Modeling with the DEVS formalism, however, requires in-depth knowledge of the simulation domain, as well as a significant amount of time and effort. One of the key ideas of the proposed method is to generate a plant model based on the log data obtained from the production system. The proposed method is very intuitive, and it can be used to generate the full behavior model of a virtual device. The proposed approach was applied to an AGV (Automated Guided Vehicle).
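Extracting a device behavior model from production log data might look roughly like the sketch below. The log record format (time, device, event) and the state-transition/mean-dwell-time representation are assumptions for illustration, not the paper's actual model structure.

```python
from collections import defaultdict

def build_device_model(log):
    """Build a simple behavior model of a virtual device from timestamped
    log records (time, device, event): the observed event transitions and
    the mean dwell time spent in each state."""
    transitions = defaultdict(set)   # device -> {(from_event, to_event)}
    durations = defaultdict(list)    # (device, event) -> list of dwell times
    last = {}                        # device -> (event, time) of last record
    for time, device, event in sorted(log):
        if device in last:
            prev_event, prev_time = last[device]
            transitions[device].add((prev_event, event))
            durations[(device, prev_event)].append(time - prev_time)
        last[device] = (event, time)
    mean_durations = {k: sum(v) / len(v) for k, v in durations.items()}
    return dict(transitions), mean_durations
```

A virtual device replaying these transitions with the recorded dwell times behaves like the real device did during logging, which is the intuition behind log-based plant modeling.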

FUZZY IDENTIFICATION BY MEANS OF AUTO-TUNING ALGORITHM AND WEIGHTING FACTOR

  • Park, Chun-Seong;Oh, Sung-Kwun;Ahn, Tae-Chon;Pedrycz, Witold
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.701-706
    • /
    • 1998
  • A design method of rule-based fuzzy modeling is presented for the model identification of complex and nonlinear systems. The proposed rule-based fuzzy modeling implements system structure and parameter identification in the efficient form of "IF..., THEN..." statements, using the theories of optimization and linguistic fuzzy implication rules. The improved complex method, which is a powerful auto-tuning algorithm, is used for tuning the parameters of the premise membership functions in consideration of the overall structure of the fuzzy rules. The objective function, which includes weighting factors, is auto-tuned for better performance of the fuzzy model using training and testing data. By adjusting the weighting factor of the training and testing data, we can construct the optimal fuzzy model from the objective function. The least squares method is utilized for the identification of the optimum consequence parameters. A gas furnace and a sewage treatment process are used to evaluate the performance of the proposed rule-based fuzzy modeling.
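The two ingredients named in the abstract, a weighted objective over training and testing errors and least-squares identification of consequent parameters, can be sketched as below. This is a minimal illustration under assumed conventions (a linear consequent and a convex combination of the two errors), not the paper's exact formulation.

```python
import numpy as np

def consequent_least_squares(X, y):
    """Identify linear consequent parameters by least squares (y ~ X @ theta)."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def weighted_objective(e_train, e_test, w_train=0.5, w_test=0.5):
    """Weighted performance index over training and testing errors;
    adjusting the weights trades fit against generalization."""
    return w_train * e_train + w_test * e_test
```

Tuning `w_train` and `w_test` shifts the optimum toward models that fit the training data or toward models that generalize to the testing data, which is the role the weighting factors play in the paper's objective function.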


A Study of Fine Tuning Pre-Trained Korean BERT for Question Answering Performance Development (사전 학습된 한국어 BERT의 전이학습을 통한 한국어 기계독해 성능개선에 관한 연구)

  • Lee, Chi Hoon;Lee, Yeon Ji;Lee, Dong Hee
    • Journal of Information Technology Services
    • /
    • v.19 no.5
    • /
    • pp.83-91
    • /
    • 2020
  • Language models such as BERT have become an important factor in deep learning-based natural language processing. Pre-training transformer-based language models is computationally expensive, since they consist of deep and broad layers using an attention mechanism and require a huge amount of training data. Hence, it has become standard practice to fine-tune large pre-trained language models released by Google or other companies that can afford the resources and cost. There are various techniques for fine-tuning language models, and this paper examines three of them: data augmentation, hyperparameter tuning, and partly reconstructing the neural network. For data augmentation, we use no-answer augmentation and a back-translation method. Useful combinations of hyperparameters are identified by conducting a number of experiments. Finally, we add GRU and LSTM networks on top of the pre-trained BERT model to boost its performance. Fine-tuning the pre-trained Korean language model through the methods mentioned above pushes the F1 score from the baseline up to 89.66. Moreover, some failed attempts provide important lessons and point out promising directions for further work.
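The F1 score reported for machine reading comprehension is conventionally the token-overlap F1 used in SQuAD/KorQuAD-style evaluation. A minimal sketch of that metric (assuming whitespace tokenization; real evaluation scripts also normalize punctuation and casing):

```python
from collections import Counter

def token_f1(prediction, ground_truth):
    """Token-overlap F1 between a predicted answer span and the gold answer,
    the standard metric for extractive question answering."""
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, predicting "the black cat" against the gold answer "the cat" gives precision 2/3 and recall 1, so F1 = 0.8; the paper's 89.66 is this score averaged over the test set (scaled to 100).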

A FRF-based algorithm for damage detection using experimentally collected data

  • Garcia-Palencia, Antonio;Santini-Bell, Erin;Gul, Mustafa;Catbas, Necati
    • Structural Monitoring and Maintenance
    • /
    • v.2 no.4
    • /
    • pp.399-418
    • /
    • 2015
  • Automated damage detection through Structural Health Monitoring (SHM) techniques has become an active area of research in the bridge engineering community, but widespread implementation on in-service infrastructure still presents some challenges. In the meantime, visual inspection remains the most common method for condition assessment, even though the collected information is highly subjective and certain types of damage can be overlooked by the inspector. In this article, a Frequency Response Function (FRF)-based model updating algorithm is evaluated using experimentally collected data from the University of Central Florida (UCF)-Benchmark Structure. A protocol for measurement selection and a regularization technique are presented in this work in order to provide the most well-conditioned model updating scenario for the target structure. The proposed technique is composed of two main stages. First, the initial finite element model (FEM) is calibrated through model updating so that it captures the dynamic signature of the UCF Benchmark Structure in its healthy condition. Second, based upon collected data from the damaged condition, the updating process is repeated on the baseline (healthy) FEM. The difference between the updated parameters from the subsequent stages reveals both the location and the extent of damage in a "blind" scenario, without any previous information about the type and location of the damage.
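The core idea of FRF-based model updating with regularization can be illustrated on a single-degree-of-freedom system, far simpler than the paper's full finite element model. The inverse-FRF relation used here and the Tikhonov penalty pulling the parameter toward its prior value are assumptions for the sketch:

```python
import numpy as np

def frf(omega, m, c, k):
    """Receptance FRF of a single-DOF system: H(w) = 1 / (k - m w^2 + i c w)."""
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

def update_stiffness(omega, H_meas, m, c, k0, reg=1e-6):
    """Regularized least-squares stiffness update from measured FRFs.
    Uses Re(1/H) = k - m w^2, which is linear in k, plus a Tikhonov
    penalty pulling k toward the prior (baseline) value k0."""
    k_lines = (1.0 / H_meas).real + m * omega**2   # one estimate per frequency line
    n = len(k_lines)
    # closed-form minimizer of sum_i (k_lines[i] - k)^2 + reg * (k - k0)^2
    return float((k_lines.sum() + reg * k0) / (n + reg))
```

Running the update once on healthy data and once on damaged data, then comparing the two stiffness estimates, mirrors the paper's two-stage scheme: a drop in the updated stiffness indicates damage at that parameter's location.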

Machine learning-based evaluation technology of 3D spatial distribution of residual radioactivity in large-scale radioactive structures

  • UkJae Lee;Phillip Chang;Nam-Suk Jung;Jonghun Jang;Jimin Lee;Hee-Seock Lee
    • Nuclear Engineering and Technology
    • /
    • v.56 no.8
    • /
    • pp.3199-3209
    • /
    • 2024
  • During the decommissioning of nuclear and particle accelerator facilities, a considerable amount of large-scale radioactive waste may be generated. Accurately defining the activation level of the waste is crucial for proper disposal. However, directly measuring the internal radioactivity distribution poses challenges. This study introduces a novel technology employing machine learning to assess the internal radioactivity distribution based on external measurements. Random radioactivity distributions within a structure were established, and the photon spectrum measured by detectors outside the structure was simulated using the FLUKA Monte-Carlo code. By training on spectrum data corresponding to various radioactivity distributions, an evaluation model for radioactivity was developed from the simulated data. Convolutional Neural Network and Transformer methods were utilized to establish the evaluation model. The model was trained on 5425 simulated datasets, and 603 datasets were used to obtain the evaluation results. Preprocessing was applied to the datasets, but the evaluation model using raw spectrum data showed the best results. The intensity and shape of the radioactivity distribution inside the structure were estimated with a relative error of 10%. Additionally, the evaluation based on the constructed model takes only a few seconds to complete.

A New Approach to Web Data Mining Based on Cloud Computing

  • Zhu, Wenzheng;Lee, Changhoon
    • Journal of Computing Science and Engineering
    • /
    • v.8 no.4
    • /
    • pp.181-186
    • /
    • 2014
  • Web data mining aims at discovering useful knowledge from various Web resources. There is a growing trend among companies, organizations, and individuals alike of gathering information through Web data mining to utilize that information in their best interest. In computer science, cloud computing is a synonym for distributed computing over a network; it relies on the sharing of resources to achieve coherence and economies of scale, similar to a utility over a network, and provides the ability to run a program or application on many connected computers at the same time. In this paper, we propose a new system framework based on the Hadoop platform to realize the collection of useful information from Web resources. The system framework is based on the Map/Reduce programming model of cloud computing. We propose a new data mining algorithm to be used in this system framework. Finally, we demonstrate the feasibility of this approach through a simulation experiment.
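The Map/Reduce programming model the framework builds on can be sketched in a few lines. This single-process imitation (with a word-count mapper/reducer as a stand-in mining job) is an assumption for illustration; on Hadoop the map, shuffle, and reduce phases run distributed across nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    """Apply the mapper to every input record, yielding (key, value) pairs."""
    return list(chain.from_iterable(mapper(rec) for rec in records))

def shuffle(pairs):
    """Group intermediate values by key, as Hadoop does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key's grouped values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# A word-count job: mine term frequencies from fetched Web pages.
def mapper(page):
    return [(word, 1) for word in page.lower().split()]

def reducer(word, counts):
    return sum(counts)
```

A mining algorithm plugs in as a custom mapper/reducer pair, while the framework handles partitioning and grouping.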

Trotting Gait Generation Based on the Lizard Biometric Data (도마뱀 생체 데이터를 이용한 속보 걸음새 생성)

  • Kim, Chang Hoi;Shin, Ho Cheol;Lee, Heung Ho
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.62 no.10
    • /
    • pp.1436-1443
    • /
    • 2013
  • A variety of studies on imitating the skeletal structure and gait of legged animals have been conducted in order to develop walking robots that can adapt to atypical environments. In this paper, we analyzed the gait of a bearded dragon lizard using a motion capture system, proposed a calibration scheme for the motion data, and generated the trotting gait of the lizard based on the calibrated data. We also constructed a dynamic model based on the biometric data of the bearded dragon lizard and applied the trotting gait to the dynamic model. We verified the validity of the gait with commercial dynamic simulation software.
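The defining property of a trot is that diagonal leg pairs move in phase while the two pairs are half a gait cycle apart. A minimal sketch of generating such a gait (the sinusoidal joint trajectory and the amplitude are assumptions; the paper derives the trajectories from the captured lizard data):

```python
import math

# Trot: diagonal pairs (LF+RH, RF+LH) in phase, the two pairs half a cycle apart.
PHASE = {"LF": 0.0, "RH": 0.0, "RF": 0.5, "LH": 0.5}  # fraction of gait cycle

def trot_gait(t, period=1.0, amplitude=20.0):
    """Hip swing angle (degrees) of each leg at time t for a trotting gait."""
    return {leg: amplitude * math.sin(2 * math.pi * (t / period - phi))
            for leg, phi in PHASE.items()}
```

Replacing the sinusoid with the calibrated motion-capture trajectories, while keeping the same phase relations, yields a biologically grounded trot like the one applied to the paper's dynamic model.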

Bayesian Inference for Predicting the Default Rate Using the Power Prior

  • Kim, Seong-W.;Son, Young-Sook;Choi, Sang-A
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.3
    • /
    • pp.685-699
    • /
    • 2006
  • Commercial banks and other related institutions have developed internal models to better quantify their financial risks. Since an appropriate credit risk model plays a very important role in risk management at financial institutions, a more accurate model that forecasts credit losses is needed, and statistical inference on that model is required. In this paper, we propose a new method for estimating a default rate. It is a Bayesian approach using the power prior, which allows historical data to be incorporated in estimating the default rate. Inference on current data can be more reliable if similar data from previous studies exist; Ibrahim and Chen (2000) utilize such data to characterize the power prior, which discounts the historical likelihood when estimating the parameters of the model. We demonstrate our methodology with a real SOHO data set and also perform a simulation study.
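For a default rate, the power prior has a simple conjugate form when defaults are modeled as binomial counts with a beta prior. This beta-binomial setting is an illustrative assumption (the paper's model may differ), but the discounting role of the power parameter a0 is exactly as in Ibrahim and Chen (2000):

```python
def power_prior_posterior(y, n, y0, n0, a0, alpha=1.0, beta=1.0):
    """Posterior Beta parameters for a default rate p under the power prior.

    Current data: y defaults out of n loans.  Historical data: y0 out of n0,
    discounted by a0 in [0, 1] (a0 = 0 ignores history, a0 = 1 pools it fully).
    With a Beta(alpha, beta) initial prior, conjugacy gives
        p | data ~ Beta(alpha + y + a0*y0, beta + (n - y) + a0*(n0 - y0)).
    """
    return alpha + y + a0 * y0, beta + (n - y) + a0 * (n0 - y0)

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)
```

Raising a0 shifts the posterior mean toward the historical default rate, so a0 controls how much the previous portfolio is trusted relative to the current one.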

Applying Formal Methods to Modeling and Analysis of Real-time Data Streams

  • Kapitanova, Krasimira;Wei, Yuan;Kang, Woo-Chul;Son, Sang-H.
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.1
    • /
    • pp.85-110
    • /
    • 2011
  • Achieving situation awareness is especially challenging for real-time data stream applications because they i) operate on continuous unbounded streams of data, and ii) have inherent real-time requirements. In this paper we show how formal data stream modeling and analysis can be used to better understand stream behavior, evaluate query costs, and improve application performance. We use MEDAL, a formal specification language based on Petri nets, to model the data stream queries and the quality-of-service management mechanisms of RT-STREAM, a prototype system for data stream management. MEDAL's ability to combine query logic and data admission control in one model allows us to design a single comprehensive model of the system. This model can be used to perform a large set of analyses to help improve the application's performance and quality of service.
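The Petri-net semantics underlying a language like MEDAL reduces to token consumption and production. A minimal sketch (the dictionary encoding and the toy stream-admission transition are assumptions for illustration, not MEDAL syntax):

```python
def enabled(marking, transition):
    """A transition is enabled iff every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in transition["in"].items())

def fire(marking, transition):
    """Fire a transition: consume tokens from input places, produce on outputs."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, w in transition["in"].items():
        m[p] -= w
    for p, w in transition["out"].items():
        m[p] = m.get(p, 0) + w
    return m

# Toy admission-control model: a stream tuple is admitted only while
# buffer capacity remains, mirroring QoS-driven data admission control.
admit = {"in": {"arrivals": 1, "buffer_slots": 1}, "out": {"queued": 1}}
```

Because both query logic and admission control become transitions over the same marking, reachability-style analyses on the combined net can expose overload and blocking behavior before deployment.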