• Title/Summary/Keyword: input matrix


Developing an Evacuation Evaluation Model for Offshore Oil and Gas Platforms Using BIM and Agent-based Model

  • Tan, Yi;Song, Yongze;Gan, Vincent J.L.;Mei, Zhongya;Wang, Xiangyu;Cheng, Jack C.P.
    • International conference on construction engineering and project management
    • /
    • 2017.10a
    • /
    • pp.32-41
    • /
    • 2017
  • Accidents on offshore oil and gas platforms (OOGPs) often cause serious fatalities and financial losses, given the demanding environments in which platforms are located and the complex topside structures they carry. Evacuation planning on platforms is therefore challenging. Computational tools are a good choice for planning evacuations through emergency simulation; however, the complex structure of platforms and the variety of evacuation behaviors usually weaken the advantages of computational simulation. This study therefore developed a simulation model for OOGPs that evaluates different evacuation plans and improves evacuation performance by integrating building information modeling (BIM) and agent-based modeling (ABM). The developed model consists of four parts: evacuation model input, simulation environment modeling, agent definition, and simulation and comparison. The necessary platform information is extracted from BIM and then used to model the simulation environment by integrating a matrix model and a network model. During agent definition, in addition to basic characteristics, environment-sensing and dynamic escape-path planning functions are developed to improve simulation performance. An example OOGP BIM topside with different emergency scenarios illustrates the developed model. The results show that the model simulates evacuation on OOGPs well and improves evacuation performance; it is also suggested to be applicable to other industries, such as the architecture, engineering, and construction industry.
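As a rough illustration of the matrix-model environment and the agents' dynamic escape-path planning, the sketch below runs breadth-first search over a toy occupancy grid. The grid, exit cell, and blocked hazard cell are illustrative assumptions, not the authors' BIM-derived model.

```python
from collections import deque

# Toy occupancy matrix standing in for the BIM-derived environment:
# 0 = walkable cell, 1 = obstacle (equipment, walls).
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
EXIT = (4, 4)  # hypothetical muster point

def escape_path(start, blocked=frozenset()):
    """Shortest path to EXIT by BFS; `blocked` models cells that an
    agent's environment sensing has flagged (e.g. fire), forcing a re-plan."""
    rows, cols = len(GRID), len(GRID[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == EXIT:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and GRID[nxt[0]][nxt[1]] == 0
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no escape route available

print(escape_path((0, 0)))                    # nominal route
print(escape_path((0, 0), blocked={(2, 3)}))  # dynamic re-plan around a hazard
```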


A new multi-stage SPSO algorithm for vibration-based structural damage detection

  • Sanjideh, Bahador Adel;Hamzehkolaei, Azadeh Ghadimi;Hosseinzadeh, Ali Zare;Amiri, Gholamreza Ghodrati
    • Structural Engineering and Mechanics
    • /
    • v.84 no.4
    • /
    • pp.489-502
    • /
    • 2022
  • This paper develops an optimization-based finite element model updating approach for structural damage identification and quantification. A modal-flexibility-based error function is introduced, which uses the modal assurance criterion to formulate the updating problem as an optimization problem. Because the relationship between candidate solutions and the error function's output is not explicit, a robust and efficient optimization algorithm is needed to explore the solution domain and find the global extremum quickly and accurately. This paper proposes a new multi-stage Selective Particle Swarm Optimization (SPSO) algorithm to solve the optimization problem. The proposed multi-stage strategy not only fixes the premature convergence of the original Particle Swarm Optimization (PSO) algorithm but also speeds up the search stage and reduces the corresponding computational cost, without changing or adding terms to the algorithm's formulation. Solving the introduced objective function with the proposed multi-stage SPSO leads to a feedback-wise, self-adjusting damage detection method that can effectively assess the health of structural systems. The performance and precision of the proposed method are verified and benchmarked against the original PSO and some of its most popular variants, including SPSO, DPSO, APSO, and MSPSO, on two numerical examples of complex civil engineering structures under different damage patterns. Comparative studies also evaluate the method's performance in the presence of measurement errors, and its robustness and accuracy are validated by assessing the health of a six-story shear-type building structure tested on a shake table. The results establish the proposed method as effective and robust even when only the first few vibration modes are used to form the objective function.
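For orientation, here is a bare-bones PSO loop of the kind the multi-stage SPSO builds on, minimizing the Rastrigin test function as a stand-in for the modal-flexibility error function. The swarm size and coefficients are conventional defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(z):
    """Multimodal stand-in objective; global minimum 0 at z = 0
    (mimicking the 'no damage' solution of the updating problem)."""
    return float(np.sum(z**2 - 10 * np.cos(2 * np.pi * z) + 10))

def pso(f, dim, n=30, iters=300, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    x = rng.uniform(lo, hi, (n, dim))      # particle positions
    v = np.zeros((n, dim))                 # particle velocities
    pbest = x.copy()                       # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()   # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, err = pso(rastrigin, dim=5)
print(best.round(3), round(err, 6))
```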

Near-Optimal Low-Complexity Hybrid Precoding for THz Massive MIMO Systems

  • Yuke Sun;Aihua Zhang;Hao Yang;Di Tian;Haowen Xia
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.4
    • /
    • pp.1042-1058
    • /
    • 2024
  • Terahertz (THz) communication is becoming a key technology for future 6G wireless networks because of its ultra-wide bandwidth. However, implementing THz communication systems poses formidable challenges, notably the beam-splitting effect and the high computational complexity associated with it. Our primary objective is to design a hybrid precoder that minimizes the Euclidean distance from the fully digital precoder. The analog precoding part adopts the delay-phase alternating minimization (DP-AltMin) algorithm, which divides the analog precoder into phase shifters and time delayers; incorporating time delays effectively addresses the beam-splitting effect in THz communication. The traditional digital precoding solution, however, requires matrix inversion in THz massive multiple-input multiple-output (MIMO) systems, which incurs significant computational complexity and complicates the design of the analog precoder. To address this issue, we exploit the characteristics of THz massive MIMO systems and construct the digital precoder as a product of scale factors and semi-unitary matrices. We use the Schatten norm and Hölder's inequality to create the semi-unitary matrices after initializing the scale factors from the power allocation. Finally, the analog and digital precoders are alternately optimized to obtain the ultimate hybrid precoding scheme. Extensive numerical simulations demonstrate that the proposed algorithm outperforms existing methods in mitigating the beam-splitting issue, improves system performance, and exhibits lower complexity, while aligning well with practical application requirements, underlining its practicality and efficiency.
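A hedged sketch of the inversion-light digital-precoder idea: the closest semi-unitary matrix to a target in Frobenius norm comes from its SVD with the singular values dropped, a Procrustes-type step standing in for the paper's Schatten-norm/Hölder construction. The dimensions and the matched-filter target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_semi_unitary(M):
    """Closest semi-unitary matrix to M in Frobenius norm: SVD with the
    singular values discarded (columns of the result are orthonormal)."""
    U, _, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ Vh

Nt, Nrf, Ns = 64, 8, 4  # antennas, RF chains, data streams (assumed sizes)
F_opt = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf)))  # phase-only analog part

# Digital precoder as scale factor x semi-unitary matrix; the matched-filter
# target F_rf^H F_opt avoids the matrix inversion of the traditional solution.
F_bb = nearest_semi_unitary(F_rf.conj().T @ F_opt)
gamma = np.sqrt(Ns) / np.linalg.norm(F_rf @ F_bb)   # power normalization
print(np.linalg.norm(F_opt - gamma * F_rf @ F_bb))  # distance to fully digital
```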

EFFECT OF PROCESS VARIABLES ON FRICTION STIRRED MICROSTRUCTURE AND SURFACE HARDNESS OF AZ31 MAGNESIUM ALLOY

  • JAE-YEON KIM;JUNG-WOO HWANG;SEUNG-MI LEE;CHANG-YOUNG HYUN;IK-KEUN PARK;JAI-WON BYEON
    • Archives of Metallurgy and Materials
    • /
    • v.64 no.3
    • /
    • pp.907-911
    • /
    • 2019
  • The effects of various friction stir processing (FSP) variables on the microstructural evolution and microhardness of the AZ31 magnesium alloy were investigated. The processing variables include the rotational and traveling speeds of the tool, the type of second-phase particle (i.e., diamond, Al2O3, and ZrO2), and the groove depth (i.e., the volume fraction of second phase). Grain size, second-phase particle distribution, grain texture, and microhardness were analyzed as functions of the FSP variables. The FSPed AZ31 composites fabricated under a high-heat-input condition showed better particle dispersion without macro defects. Compared with FSPed AZ31 without strengthening particles, all composite specimens showed decreased grain size and increased microhardness regardless of groove depth. For the AZ31/diamond composite, with a grain size of about 1 μm, the microhardness (about 108 Hv) was roughly twice that of the matrix alloy (about 52 Hv). The effect of the second-phase particles on retarding grain growth, and the resulting hardness increase, is discussed.
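As a back-of-envelope illustration of the grain-size/hardness link, the snippet below fits a Hall-Petch-type relation H = H0 + k/√d through the two hardness values quoted above. The matrix grain size (9 μm) is an assumed typical value not given in the abstract, so the fitted constants are purely illustrative.

```python
import numpy as np

# Two (grain size, hardness) points: ASSUMED 9 um for the matrix alloy,
# and the ~1 um / ~108 Hv reported for the AZ31/diamond composite.
d = np.array([9.0, 1.0])     # grain size, um
H = np.array([52.0, 108.0])  # microhardness, Hv

# Exact two-point fit of H = H0 + k / sqrt(d).
A = np.vstack([np.ones_like(d), 1.0 / np.sqrt(d)]).T
H0, k = np.linalg.lstsq(A, H, rcond=None)[0]
print(f"H0 = {H0:.1f} Hv, k = {k:.1f} Hv*um^0.5")
print(f"predicted hardness at d = 4 um: {H0 + k / np.sqrt(4.0):.1f} Hv")
```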

Robo-Advisor Algorithm with Intelligent View Model

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset-allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Because Robo-Advisor algorithms suggest an asset allocation to investors, mathematical or statistical asset-allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset-allocation model: a simple but quite intuitive portfolio strategy in which assets are allocated, via optimization, to minimize portfolio risk while maximizing expected portfolio return. Despite its theoretical background, both academics and practitioners find that the standard mean-variance portfolio is very sensitive to the expected returns estimated from past price data, and corner solutions allocated to only a few assets are common. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: the implied equilibrium return of each asset is derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns; these new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman model produces the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolios for Black-Litterman users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial-basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are returns, standard deviations, Stochastics %K, and the price-parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which serve as inputs to the intelligent views model. Price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probabilities are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent views matrix to produce the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk-parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values; the training period runs from 2008 to 2015 and the testing period from 2016 to 2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance portfolio, the risk-parity portfolio, and the market portfolio: the total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value; the maximum drawdown was -20.8%, the lowest value; and the Sharpe ratio, which measures return relative to risk, was the highest at 0.17. Overall, the suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset-allocation algorithms in real trading.
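A minimal sketch of the Black-Litterman step described above, with hypothetical SVM outputs standing in for the intelligent views: three toy assets replace the 8 KOSPI 200 sector indexes, and δ, τ, and the probability-scaled view confidences are conventional illustrative choices.

```python
import numpy as np

# Toy covariance matrix and market-cap weights for 3 asset classes.
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.025, 0.004],
                  [0.010, 0.004, 0.030]])
w_mkt = np.array([0.5, 0.3, 0.2])
delta, tau = 2.5, 0.05  # risk aversion and shrinkage (common choices)

# Implied equilibrium returns via reverse optimization.
Pi = delta * Sigma @ w_mkt

# Hypothetical SVM-derived views: P picks the assets each view concerns,
# Q holds the forecast returns, and the class probabilities scale the
# view confidence through Omega (lower probability -> larger variance).
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0]])
Q = np.array([0.03, 0.01])
prob = np.array([0.8, 0.6])
Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T) / prob)

# Black-Litterman posterior expected returns, then mean-variance weights.
inv = np.linalg.inv
M = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P)
mu_bl = M @ (inv(tau * Sigma) @ Pi + P.T @ inv(Omega) @ Q)
w_bl = inv(delta * Sigma) @ mu_bl
print(mu_bl.round(4), (w_bl / w_bl.sum()).round(3))
```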

Prediction of Key Variables Affecting NBA Playoffs Advancement: Focusing on 3 Points and Turnover Features

  • An, Sehwan;Kim, Youngmin
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.263-286
    • /
    • 2022
  • This study acquires NBA statistics covering 32 years, from 1990 to 2022, by web crawling, observes the variables of interest through exploratory data analysis, and generates related derived variables. Unused variables were removed by purifying the input data, and correlation analysis, t-tests, and ANOVA were performed on the remaining variables. For the variables of interest, the difference in means between teams that did and did not advance to the playoffs was tested, and, to complement this, the mean differences among three ranking-based groups (upper/middle/lower) were reconfirmed. Only the current season's data were used as the test set; 5-fold cross-validation was performed by splitting the remaining data into training and validation sets. Overfitting was ruled out by comparing the cross-validation results with the final results on the test set and confirming that the performance metrics did not differ. Because the raw data are of high quality and the statistical assumptions are satisfied, most models performed well despite the small data set. This study not only predicts NBA game outcomes and classifies playoff advancement with machine learning, but also examines, through feature importance, whether the variables of interest rank among the most important input attributes. Visualizing SHAP values overcame the limitation that feature-importance scores alone cannot be interpreted, and compensated for the inconsistency of importance calculations as variables are entered or removed. Many variables related to three-pointers and turnovers, the features of interest in this study, were found among the major variables affecting playoff advancement in the NBA. Although this study resembles existing sports-analytics work on match results, playoffs, and championship prediction, and likewise compares several machine learning models, it differs in that the features of interest were set in advance and statistically verified before being compared with the machine learning results. It is also differentiated from existing studies by presenting explanatory visualizations using SHAP, one of the XAI methods.
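A compact sketch of the evaluation pipeline on synthetic data: 5-fold cross-validation and feature importances with scikit-learn. The feature names are hypothetical, and the playoff label is deliberately wired to the 3-point and turnover columns so the expected importances are known in advance; the paper's per-prediction SHAP view would follow via shap.TreeExplainer on the fitted model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the crawled NBA season data.
features = ["fg3_pct", "turnovers", "rebounds", "assists", "fg_pct"]
X = rng.standard_normal((960, len(features)))
y = (0.9 * X[:, 0] - 0.7 * X[:, 1] + 0.2 * rng.standard_normal(960)) > 0

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

model.fit(X, y)
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:10s} {imp:.3f}")  # 3-point and turnover columns should lead
```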

Embedded Multi-LED Display System based on Wireless Internet using Otsu Algorithm

  • Jang, Ho-Min;Kim, Eui-Ryong;Oh, Se-Chun;Kim, Sin-Ryeong;Kim, Young-Gon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.6
    • /
    • pp.329-336
    • /
    • 2016
  • Outdoor advertising and industrial sites are trying to implement LED display systems based on image processing in order to express a variety of messages in real time. Recently, in various fields, the importance of intuitive communication using images, rather than simple text, has been increasing; a system that can output real-time information, instead of simply displaying the input text, is therefore being sought. The proposed system overcomes the limitation of conventional LED displays, which cannot output images, by converting images into a format the display can show. Using low-power LEDs, it was developed to output messages and images efficiently within limited resources. This paper presents a system that manages LED displays over a wireless network: built on an ATmega2560 with a Wi-Fi module, a server, and an Android client application, it displays images as well as text, and reduces the device's load by moving the image-conversion process to the server, where it can be managed centrally.
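The Otsu step named in the title is standard and can be sketched directly: pick the grayscale threshold that maximizes the between-class variance of the histogram, then map pixels above it to lit LEDs. The toy two-population image below stands in for a frame received from the server.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance
    of the histogram (separating 'off' and 'on' LED pixel classes)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                # class-0 probability up to t
    mu = np.cumsum(prob * np.arange(256))  # cumulative mean
    # Small epsilon guards the divide at omega = 0 or 1.
    sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    return int(np.argmax(sigma_b))

# Toy image with dark and bright populations (stand-in for a server frame).
rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(60, 12, 600), rng.normal(190, 15, 400)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(40, 25)

t = otsu_threshold(img)
led_on = img > t  # binary on/off map for the LED matrix
print("threshold:", t, "lit pixels:", int(led_on.sum()))
```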

Evaluation of Thermal Degradation of CFRP Flexural Strength at Elevated Temperature

  • Hwang Tae-Kyung;Park Jae-Beom;Lee Sang-Yun;Kim Hyung-Geun;Park Byung-Yeol;Doh Young-Dae
    • Composites Research
    • /
    • v.18 no.2
    • /
    • pp.20-29
    • /
    • 2005
  • To evaluate the flexural deformation and strength of a composite motor case above the glass transition temperature (Tg, 170 °C) of the resin, a finite element analysis (FEA) model considering material nonlinearity and a progressive failure mode was proposed. Laminated flexural specimens with the same lay-up and thickness as the composite motor case were tested in 4-point bending to verify the validity of the FEA model, and mechanical properties at high temperature were measured to obtain input values for the FEA. Because the resin-dominated material properties deteriorate sharply in the temperature range beyond Tg, the flexural stiffness and strength of the laminated specimens at 200 °C were degraded by about 70% and 80%, respectively, compared with room-temperature results. Above Tg, the failure mode changed from a progressive mode, initiated by matrix cracking in the 90° plies on the bottom side and terminated by delamination at the specimen centerline, to a fiber compressive breakage mode on the top side. The stress analysis verified this progressive failure mechanism, and the predicted bending stiffness and strength agreed well with the test results.
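A hedged worked example of the flexural-strength arithmetic: assuming a 4-point bending fixture with the load span equal to half the support span (an ASTM D6272-type setup; the abstract does not give the fixture geometry) and illustrative specimen dimensions, the peak stress is σ = 3FL/(4bh²), and the reported 80% knockdown at 200 °C leaves 20% of it.

```python
# Illustrative values only; the abstract reports the percentage knockdowns,
# not the specimen geometry or loads.
F = 2500.0  # peak load, N
L = 0.10    # support span, m
b = 0.025   # specimen width, m
h = 0.004   # specimen thickness, m

# Max flexural stress for half-span 4-point bending: sigma = 3FL / (4 b h^2).
sigma = 3 * F * L / (4 * b * h**2) / 1e6  # MPa
print(f"room-temperature flexural strength ~ {sigma:.0f} MPa")
print(f"retained at 200 C after 80% degradation: ~ {0.2 * sigma:.0f} MPa")
```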

Connection between Fourier of Signal Processing and Shannon of 5G SmartPhone

  • Kim, Jeong-Su;Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.6
    • /
    • pp.69-78
    • /
    • 2017
  • Shannon of the 5G smartphone and Fourier of signal processing meet in the sampling theorem (sampling at twice the highest frequency). In this paper, we note that the original Shannon theorem gives the capacity of a point-to-point link, whereas 5G has evolved toward multi-point MIMO over relay channels; the Fourier transform, meanwhile, is signal processing with fixed parameters. We analyze performance by proposing a 2N-1-point multivariate Fourier-Jacket transform for the multimedia age. The authors tackle the signal-processing complexity issue by proposing a Jacket-based fast method that reduces precoding/decoding complexity in terms of computation time. Jacket transforms have found applications in signal processing and coding theory. They are defined as $n \times n$ matrices $A=(a_{jk})$ over a field $F$ with the property $AA^{\dagger}=nI_n$, where $A^{\dagger}$ is the transpose of the element-wise inverse of $A$, that is, $A^{\dagger}=(a_{kj}^{-1})$; they generalize Hadamard transforms and centre-weighted Hadamard transforms. In particular, exploiting the Jacket transform properties, the authors propose a new eigenvalue decomposition (EVD) method with application to precoding and decoding of distributive multi-input multi-output channels in relay-based decode-and-forward (DF) cooperative wireless networks, in which transmission uses single-symbol-decodable space-time block codes. The proposed Jacket-based EVD method achieves a significant reduction in computation time compared with the conventional EVD method; this reduction is evaluated quantitatively through mathematical analysis and numerical results.
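The defining Jacket property $AA^{\dagger}=nI_n$ is easy to verify numerically. The sketch below checks it for a Sylvester-built Hadamard matrix and for a centre-weighted variant; the weight w = 2 is an arbitrary illustrative choice.

```python
import numpy as np

def jacket_dagger(A):
    """The Jacket 'dagger': transpose of the element-wise inverse of A,
    i.e. (A^+)_{jk} = 1 / A_{kj}."""
    return (1.0 / A).T

def is_jacket(A):
    n = A.shape[0]
    return np.allclose(A @ jacket_dagger(A), n * np.eye(n))

# Hadamard matrices are the canonical Jacket example: entries +/-1, so the
# element-wise inverse is the matrix itself and A A^T = n I_n.
H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])
H4 = np.kron(H2, H2)  # Sylvester construction
print(is_jacket(H4))  # True

# A centre-weighted Hadamard variant stays Jacket for any nonzero weight w.
w = 2.0
W4 = np.array([[1.0,  1.0,  1.0,  1.0],
               [1.0,   -w,    w, -1.0],
               [1.0,    w,   -w, -1.0],
               [1.0, -1.0, -1.0,  1.0]])
print(is_jacket(W4))  # True
```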

Design of User Clustering and Robust Beam in 5G MIMO-NOMA System Multicell

  • Kim, Jeong-Su;Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.59-69
    • /
    • 2018
  • In this paper, we present a robust beamforming design to tackle the weighted sum-rate maximization (WSRM) problem in a multicell multiple-input multiple-output (MIMO) non-orthogonal multiple-access (NOMA) downlink system for 5G wireless communications. This work considers imperfect channel state information (CSI) at the base station (BS) by adding uncertainties to the channel estimation matrices under a worst-case model, i.e., the singular value uncertainty model (SVUM). With this observation, the WSRM problem is formulated subject to the transmit power constraints at the BS. The objective problem is a non-deterministic polynomial (NP) problem, which is difficult to solve. We propose a robust beamforming design based on the majorization-minimization (MM) technique to find the optimal transmit beamforming matrix and efficiently solve the objective problem. In addition, we propose a joint user clustering and power allocation (JUCPA) algorithm in which the best user pair is selected as a cluster to attain a higher sum rate. Extensive numerical results show that the proposed robust beamforming design, together with the proposed JUCPA algorithm, significantly improves the sum-rate performance compared with existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
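A simplified stand-in for the user-clustering and power-allocation idea, not the paper's JUCPA algorithm: pair the strongest with the weakest user by channel gain, give the weak user the larger power share, and compute the 2-user downlink NOMA sum rate under successive interference cancellation. The gains, power split, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

n_users, total_power, noise = 8, 1.0, 1e-2
gains = rng.exponential(1.0, n_users)  # toy channel gains |h|^2

# Cluster by pairing most-dissimilar gains: weakest with strongest, etc.
order = np.argsort(gains)
clusters = [(order[i], order[-(i + 1)]) for i in range(n_users // 2)]

p_cluster = total_power / len(clusters)
for weak, strong in clusters:
    p_weak, p_strong = 0.8 * p_cluster, 0.2 * p_cluster  # more power to weak user
    # Weak user decodes treating the strong user's signal as interference;
    # the strong user cancels the weak user's signal first (SIC).
    r_weak = np.log2(1 + gains[weak] * p_weak / (gains[weak] * p_strong + noise))
    r_strong = np.log2(1 + gains[strong] * p_strong / noise)
    print(f"pair ({weak}, {strong}): sum-rate {r_weak + r_strong:.2f} bit/s/Hz")
```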