• Title/Summary/Keyword: nonlinear system analysis


A Study on Fatigue Characteristic of Stent Using Finite Element Analysis (Finite Element Analysis of the Fatigue Characteristics of a Nitinol Wire Stent)

  • Kim, Han-Ki;Shin, Il-Gyun;Kim, Dong-Gon;Kim, Seong-Hyeon;Lee, Ju-Ho;Ki, Byoyng-Yun;Suh, Tae-Suk;Kim, Sang-Ho
    • Progress in Medical Physics
    • /
    • v.20 no.3
    • /
    • pp.119-124
    • /
    • 2009
  • Stents are frequently used throughout the human body. They keep pathways open in vascular or nonvascular ducts over long periods, so their stability is a very important factor. In recent years, a considerable amount of research has been carried out to estimate mechanical properties of stents, such as expansion pressure behavior, radial recoil, and longitudinal recoil, using the finite element method (FEM). However, published works on the simulation of stent fatigue behavior using FEM are relatively rare. In this paper, a nonlinear finite element method was employed to analyze the compression of a stent under external pressure and its fatigue behavior. Finite element analyses of the stent system were performed using NASTRAN FX. In conclusion, this paper shows how the stent behaves in the body and characterizes its fatigue behavior.
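
A common post-processing step for stent FEM results of this kind is a fatigue safety-factor check on the mean and alternating stresses of each element. The abstract does not state which fatigue criterion the authors applied, so the sketch below is only an illustrative modified-Goodman check with placeholder material constants and made-up element stresses.

```python
# Illustrative only: a modified-Goodman fatigue safety factor computed from
# FEM mean/alternating stresses. The criterion and the material constants are
# assumptions for the example, not values from the paper.

def goodman_safety_factor(sigma_mean, sigma_alt, sigma_ult, sigma_endurance):
    """Modified Goodman relation: 1/n = sigma_alt/sigma_e + sigma_mean/sigma_u."""
    return 1.0 / (sigma_alt / sigma_endurance + sigma_mean / sigma_ult)

# Hypothetical element-wise stresses (MPa) exported from an FEM run
elements = [
    {"id": 1, "sigma_mean": 180.0, "sigma_alt": 95.0},
    {"id": 2, "sigma_mean": 240.0, "sigma_alt": 130.0},
]

SIGMA_ULT = 1070.0  # placeholder ultimate strength (MPa)
SIGMA_END = 350.0   # placeholder endurance limit (MPa)

for e in elements:
    n = goodman_safety_factor(e["sigma_mean"], e["sigma_alt"], SIGMA_ULT, SIGMA_END)
    print(f"element {e['id']}: fatigue safety factor = {n:.2f}")
```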


Design of Summer Very Short-term Precipitation Forecasting Pattern in Metropolitan Area Using Optimized RBFNNs (Design of Very Short-Term Summer Precipitation Forecasting Patterns for the Metropolitan Area Using Optimized Polynomial Radial Basis Function Neural Networks)

  • Kim, Hyun-Ki;Choi, Woo-Yong;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.6
    • /
    • pp.533-538
    • /
    • 2013
  • The damage caused by recent, frequently occurring localized torrential rains is increasing rapidly. In densely populated metropolitan areas, casualties and property damage are serious due to landslides, debris flows, and floods. Therefore, the importance of predicting torrential rain is increasing. The precipitation characteristics of severe weather in Korea are divided into typhoons and torrential rains, and these vary with duration and area. Rainfall is difficult to predict because regional precipitation is highly volatile and nonlinear. In this paper, a very short-term precipitation forecasting pattern model is implemented using KLAPS data from the Korea Meteorological Administration. We designed the model using GA-based RBFNNs, in which structural and parametric values such as the number of inputs, polynomial type, number of FCM clusters, and fuzzification coefficient are optimized by a genetic algorithm.
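
To make the RBFNN structure concrete, the sketch below builds a minimal radial basis function network whose centers come from clustering and whose output weights come from least squares. The paper optimizes the number of inputs, polynomial type, number of FCM clusters, and fuzzification coefficient with a genetic algorithm; here those choices are fixed, k-means stands in for FCM, and the data are synthetic, purely for illustration.

```python
# Minimal RBF-network sketch: clustered centers + least-squares output weights.
import numpy as np
from sklearn.cluster import KMeans

def rbf_design_matrix(X, centers, width):
    # Gaussian basis functions centered on the cluster centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))        # stand-in predictor fields
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2     # stand-in precipitation target

centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_design_matrix(X, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output-layer weights
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```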

PERIOD CHANGE OF W UMa TYPE CONTACT BINARY AB And (W UMa형 접촉쌍성 AB And의 주기변화)

  • Jin, Ho;Han, Won-Yong;Kim, Chun-Hwey;Lee, Jae-Woo;Lee, Woo-Baik
    • Journal of Astronomy and Space Sciences
    • /
    • v.14 no.2
    • /
    • pp.242-250
    • /
    • 1997
  • CCD photometric observations of the W UMa-type eclipsing binary AB And were made from September 1994 to October 1996. Four new primary minimum times were obtained from these observations. The analysis of the times of minimum light for AB And confirms previous findings that the orbital period of AB And has been changing in the form of a sinusoidal variation. In this paper, we calculated new orbital elements with linear and nonlinear (quadratic) terms, and the best-fit equation was derived under the assumption that the period variation of AB And follows a sinusoidal pattern. From the sinusoidal term of these orbital elements, we calculated a variation period of 92 years with an amplitude of $0.^{d}059$. However, this result, which considers only the sinusoidal term, was not consistent with our recent observations. Thus, by assuming an additional parabolic period variation along with the sinusoidal pattern, we derived new best-fit orbital elements. From the quadratic coefficient of these orbital elements, we calculated a secular variation of 0.73 seconds, and from the sinusoidal term, the variation period turned out to be 62.9 years with an amplitude of $0.^{d}024$. If we assume only a sinusoidal period variation for AB And, the period would have to decrease within 10 years; however, if we consider the quadratic term together with the sinusoidal period variation in the light elements, the period is expected to increase. Therefore, long-term observations of this binary system are required to settle this issue.
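
For reference, light elements that combine a quadratic (secular) term with a sinusoidal term, as described above, are conventionally written as $T_{\rm min} = T_0 + P\,E + Q\,E^{2} + A\,\sin(\omega E + \phi)$, where $E$ is the cycle number, $T_0$ and $P$ are the reference epoch and period, $Q$ measures the secular period change, and $A$, $\omega$, and $\phi$ describe the cyclic variation. This is the generic form only, not the paper's fitted values.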


Dynamic Response and Control of Airship with Gust (외란이 작용하는 비행선의 동적 반응 및 제어)

  • Woo, G.A.;Park, I.H.;Oh, S.J.;Cho, K.R.
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.30 no.6
    • /
    • pp.69-77
    • /
    • 2002
  • To obtain the dynamic response and design the controller of an airship, the longitudinal motion of the airship with respect to a vertical gust, which is a nonlinear system, was studied. The effects of the apparent mass and moment of the airship delay the dynamic response and lengthen the settling time, which are slower than those of conventional airplanes. The airship considered here is designed to cruise at an altitude of 500~1000 m, where the atmospheric conditions are generally unstable due to wind gusts. This paper studies the case of a vertical gust, since the apparent mass effects are dominant in that plane. In addition to the study of the dynamic responses of the airship, a controller was designed using a PID controller. When the gust was applied, the airship responses recovered to the equilibrium states; however, the recovery took too long and the speed of the airship was reduced. The aim of this paper was therefore to speed up the recovery and to regain the cruising velocity. The control parameters were determined from a stability mode analysis, and the control inputs were the thrust and the elevator deflection angle.
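
The controller described is a standard PID loop; the sketch below shows a minimal discrete PID implementation acting on a toy first-order plant disturbed by a brief "gust". The gains, plant model, and disturbance are placeholders, not the paper's airship dynamics or tuned parameters.

```python
# Minimal discrete PID sketch; all numbers are illustrative placeholders.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt, x = 0.01, 0.0
pid = PID(kp=2.0, ki=0.5, kd=0.2, dt=dt)
for step in range(1000):
    t = step * dt
    gust = 1.0 if 2.0 <= t < 2.5 else 0.0        # brief vertical disturbance
    u = pid.update(setpoint=0.0, measurement=x)  # controller output
    x += dt * (-0.5 * x + u + gust)              # crude first-order plant
print("final state:", round(x, 4))
```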

Investigating the Use of Energy Performance Indicators in Korean Industry Sector (한국 산업부문의 에너지성과 지표 이용에 관한 연구)

  • Shim, Hong-Souk;Lee, Sung-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.707-725
    • /
    • 2021
  • Energy management systems (EnMS) contribute to sustainable energy saving and greenhouse gas reduction by emphasizing the role of energy management in production-oriented economies. Although understanding the methods used to measure energy performance is a key factor in constructing successful EnMS, few attempts have been made to examine these methods, their applicability, and their utility in practice. To fill this research gap, this study aimed to deepen the understanding of energy performance measures by focusing on four energy performance indicators (EnPIs) proposed by ISO 50006, namely the measured energy value, the ratio between measured values, the linear regression model, and the nonlinear regression model. This paper presents policy and managerial implications to facilitate the effective use of these measures. An analytic hierarchy process (AHP) analysis was conducted with 41 experts to analyze the preference for EnPIs and their key selection criteria by industry sector, organization, and user type. The findings suggest that the most preferred EnPI is the ratio between measured values, followed by the measured energy value. Ease of use was considered the most important criterion when choosing EnPIs.
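
To illustrate the AHP step, the sketch below derives criterion weights as the principal eigenvector of a pairwise comparison matrix and reports a consistency ratio. The 3x3 matrix is a made-up example, not the study's expert judgments or its actual set of criteria.

```python
# Minimal AHP sketch: priority weights from a pairwise comparison matrix.
import numpy as np

# Hypothetical pairwise comparisons for three EnPI selection criteria
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # normalized priority weights

ci = (eigvals[k].real - len(A)) / (len(A) - 1)  # consistency index
cr = ci / 0.58                                  # RI = 0.58 for a 3x3 matrix
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```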

Analysis of auditory temporal processing in within- and cross-channel gap detection thresholds for low-frequency pure tones (저주파수 순음에 대한 within- 및 cross-channel gap detection thresholds를 이용한 auditory temporal processing 특성 연구)

  • Koo, Sungmin;Lim, Dukhwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.58-63
    • /
    • 2022
  • This study was conducted to examine the characteristics of pitch perception and temporal resolution through within-/cross-channel gap detection thresholds (WC/CC GDTs) using low-frequency pure tones (264 Hz, 373 Hz, and 528 Hz, related to the musical tones C4, C4#, and C5). Forty young people and 20 elderly people with normal hearing participated in this study. WC GDTs were approximately 2 ms to 4 ms regardless of frequency in both groups, and there was no statistically significant difference in WC GDTs between the groups. In both groups, CC GDTs were larger than WC GDTs, and as the frequency difference increased, the CC GDTs also increased. In particular, in the between-group comparison of CC GDTs, the results of the elderly group were 8 to 10 times larger than those of the young group, a statistically significant difference. These data also showed a trend in GDTs different from previous data obtained with musical stimuli. This study suggests that GDTs may influence pitch perception mechanisms and can be used as psychoacoustic evidence for nonlinear responses of the auditory nervous system.

Investigation of O4 Air Mass Factor Sensitivity to Aerosol Peak Height Using UV-VIS Hyperspectral Synthetic Radiance in Various Measurement Conditions (UV-VIS 초분광 위성센서 모의복사휘도를 활용한 다양한 관측환경에서의 에어로솔 유효고도에 대한 O4 대기질량인자 민감도 조사)

  • Choi, Wonei;Lee, Hanlim;Choi, Chuluong;Lee, Yangwon;Noh, Youngmin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_1
    • /
    • pp.155-165
    • /
    • 2020
  • In the present study, the sensitivity of the O4 air mass factor (AMF) to the aerosol peak height (APH) has been investigated using a radiative transfer model for various parameters: wavelength (340 nm and 477 nm), aerosol type (smoke, dust, sulfate), aerosol optical depth (AOD), surface reflectance, solar zenith angle, and viewing zenith angle. In general, it was found that the O4 AMF at 477 nm is more sensitive to APH than that at 340 nm and is retrieved stably with a low spectral fitting error in Differential Optical Absorption Spectroscopy (DOAS) analysis. Under high AOD conditions, the sensitivity of the O4 AMF to APH tends to increase. The O4 AMF at 340 nm decreased with increasing solar zenith angle; this dependency is thought to be induced by the decrease in the length of the light path where O4 absorption occurs, due to the shielding effect caused by Rayleigh and Mie scattering at solar zenith angles above 40°. At 477 nm, as the solar zenith angle increased, multiple scattering caused by Rayleigh and Mie scattering partly led to an increase of the O4 AMF as a nonlinear function. Based on the synthetic radiance, APHs were retrieved using the O4 AMF. Additionally, the effect of AOD uncertainty on the APH retrieval error was investigated. Among the three aerosol types, the sulfate type was found to have the largest APH retrieval error due to the uncertainty of AOD, whereas for dust aerosol the influence of AOD uncertainty was negligible. This indicates that the aerosol type affects the APH retrieval error, since the absorption and scattering characteristics of each aerosol type differ.
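
For reference, the air mass factor used in DOAS-type analyses of this kind is the ratio of the slant column density obtained from spectral fitting to the vertical column density, $\mathrm{AMF} = \mathrm{SCD}/\mathrm{VCD}$; the O4 AMF is sensitive to aerosol height because aerosols change the average light path through the near-surface O4 profile.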

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.53-65
    • /
    • 2019
  • Object tracking is one of the important steps in achieving video-based surveillance systems and is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, the perceptron, and the support vector machine) can be applied in different designs of tracking systems. Generative methods (e.g., principal component analysis) have typically been utilized due to their simplicity and effectiveness, but they focus only on modeling the target object. Due to this limitation, discriminative methods (i.e., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization is one of the more successful; it can achieve a global minimum thanks to a quadratic approximation to a step function, whereas other methods (e.g., the support vector machine) seek local minima using nonlinear functions (e.g., the hinge loss function). Because of this quadratic approximation, total error rate minimization has appropriate properties for solving optimization problems in binary classification. However, it was originally formulated in a batch-mode setting, which limits it to offline learning; with limited computing resources, offline learning cannot handle large-scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples during the learning process, and with the growth of large-scale data sets it has become essential for many applications. Since object tracking needs to handle data samples in real time, online learning-based total error rate minimization methods are necessary to address object tracking problems efficiently. To meet this need, an online learning-based total error rate minimization method was previously developed, but it relied on an approximately reweighted technique. Although the approximation is used, this online version of total error rate minimization achieved good performance in biometric applications. However, the method assumes that total error rate minimization is achieved only asymptotically, as the number of training samples goes to infinity. Because of this approximation, learning errors can continuously accumulate as training samples are added, and the approximated online solution can then drift toward a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online learning manner. In contrast to approximately reweighted online total error rate minimization, an exactly reweighted online total error rate minimization is achieved. The proposed exact online learning method based on total error rate minimization is then applied to object tracking problems. In our object tracking system, particle filtering is adopted, and the observation model consists of both generative and discriminative components to leverage the advantages of both properties. In our experiments, the proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and a paired t-test is reported to evaluate the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks, and other online learning methods that need an exact reweighting process can use our proposed reweighting technique. In addition to object tracking, the proposed online learning method can easily be applied to object detection and recognition. Therefore, our proposed methods can contribute to the online learning community as well as the object tracking, detection, and recognition communities.
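
Since the tracker builds on particle filtering, the sketch below shows a minimal bootstrap particle filter (predict, weight, resample) for a toy one-dimensional target. The Gaussian motion and observation models are stand-ins; the paper's observation model combines generative and discriminative components and is not reproduced here.

```python
# Minimal bootstrap particle filter sketch on a drifting 1-D target.
import numpy as np

rng = np.random.default_rng(1)
n_particles, true_pos = 500, 0.0
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(30):
    true_pos += 0.5                                         # target drifts right
    z = true_pos + rng.normal(0.0, 0.3)                     # noisy measurement
    particles += 0.5 + rng.normal(0.0, 0.2, n_particles)    # predict
    weights *= np.exp(-0.5 * ((z - particles) / 0.3) ** 2)  # weight by likelihood
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)   # resample
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

print("estimate:", round(particles.mean(), 3), "truth:", round(true_pos, 3))
```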

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most previous studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. This made it possible to solve the problem of data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of existing methodologies, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This allows stable default risk assessment for companies whose risk is difficult to determine with traditional credit rating models, such as small and medium-sized enterprises and startups. Although the prediction of corporate default risk using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduces the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and the forecasts of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two model forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed in a statistically significant way from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to adopt machine learning-based default risk prediction methodologies, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
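
The out-of-fold stacking procedure described above can be sketched as follows: each sub-model produces predictions on folds it was not trained on, and those predictions become the input features of a meta-learner. The models, the synthetic data, and the logistic-regression meta-learner are stand-ins for illustration; only the seven-fold split mirrors the description.

```python
# Sketch of K-fold (K = 7) out-of-fold stacking with placeholder models/data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
base_models = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
]

kf = KFold(n_splits=7, shuffle=True, random_state=0)
meta_features = np.zeros((len(X), len(base_models)))
for j, model in enumerate(base_models):
    for train_idx, val_idx in kf.split(X):
        model.fit(X[train_idx], y[train_idx])
        # out-of-fold probability becomes a meta-feature
        meta_features[val_idx, j] = model.predict_proba(X[val_idx])[:, 1]

meta_learner = LogisticRegression().fit(meta_features, y)
print("stacked training accuracy:", round(meta_learner.score(meta_features, y), 3))
```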

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.127-146
    • /
    • 2022
  • Recently, as word embedding has shown excellent performance in various tasks of deep learning-based natural language processing, research on the advancement and application of word, sentence, and document embedding has been actively conducted. Among these directions, cross-language transfer, which enables semantic exchange between different languages, is growing together with the development of embedding models. Academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to be applied to mapping between specialized domains and generalized domains. In other words, it is expected to become possible to map the vocabulary of specialized fields such as R&D, medicine, and law into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or to provide a clue for mapping vocabulary between mutually different specialized fields. However, since the linear vector alignment that has mainly been studied in academia assumes statistical linearity, it tends to oversimplify the vector space. It essentially assumes that different types of vector spaces are geometrically similar, a limitation that causes inevitable distortion in the alignment process. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology consists of the sequential learning of a skip-connected autoencoder and a regression model to align the specialized word embeddings expressed in each space with the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of health care among national R&D tasks performed from 2011 to 2020. As a result, it was confirmed that the proposed methodology showed superior performance in terms of cosine similarity compared to the existing linear vector alignment.
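
As a rough illustration of nonlinear alignment, the sketch below trains a small residual (skip-connected) network to map domain-specific vectors onto their general-space counterparts for a set of anchor words. The random vectors, dimensions, loss, and single-network architecture are assumptions made for the example; the paper's actual pipeline is a skip-connected autoencoder followed by a separate regression model.

```python
# Sketch of a skip-connected nonlinear mapper between embedding spaces.
import torch
import torch.nn as nn

dim, anchors = 100, 2000
domain_vecs = torch.randn(anchors, dim)    # stand-in specialized embeddings
general_vecs = torch.randn(anchors, dim)   # stand-in general embeddings

class SkipMapper(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        return x + self.net(x)             # skip connection preserves geometry

model = SkipMapper(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CosineEmbeddingLoss()
target = torch.ones(anchors)               # "align these pairs" labels

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(domain_vecs), general_vecs, target)
    loss.backward()
    opt.step()

print("final cosine-embedding loss:", float(loss))
```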