• Title/Summary/Keyword: Local static analysis


Wheel tread defect detection for high-speed trains using FBG-based online monitoring techniques

  • Liu, Xiao-Zhou;Ni, Yi-Qing
    • Smart Structures and Systems
    • /
    • v.21 no.5
    • /
    • pp.687-694
    • /
    • 2018
  • The problem of wheel tread defects has become a major challenge for the health management of high-speed rail, as a wheel defect with even a small radius deviation may suffice to cause severe damage to both the train bogie components and the track structure when a train runs at high speed. It is thus highly desirable to detect defects soon after they occur and then conduct wheel turning for the defective wheelsets. Online wheel condition monitoring using a wheel impact load detector (WILD) can be an effective solution, since it can assess the wheel condition and detect potential defects during train passage. This study aims to develop an FBG-based track-side wheel condition monitoring method for the detection of wheel tread defects. The track-side sensing system uses two FBG strain gauge arrays mounted on the rail foot, measuring the dynamic strains of the paired rails excited by passing wheelsets. Each FBG array has a length of about 3 m, slightly longer than the wheel circumference, to ensure full coverage for the detection of any potential defect on the tread. A defect detection algorithm is developed that uses the online-monitored rail responses to identify potential wheel tread defects. This algorithm consists of three steps: 1) strain data pre-processing using a data smoothing technique to remove trends; 2) diagnosis of novel responses by outlier analysis of the normalized data; and 3) local defect identification by a refined analysis of the novel responses extracted in Step 2. To verify the proposed method, a field test was conducted using a test train incorporating defective wheels. The train ran at different speeds on an instrumented track for the purpose of wheel condition monitoring. By using the proposed method to process the monitoring data, all the defects were identified, and the results agreed well with those from the static inspection of the wheelsets in the depot. 
A comparison is also drawn for the detection accuracy under different running speeds of the test train, and the results show that the proposed method achieves satisfactory accuracy in wheel defect detection when the train runs at a speed higher than 30 kph. Some minor defects with a depth of 0.05 mm to 0.06 mm are also successfully detected.
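
The three-step detection algorithm described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the moving-average window, the z-score outlier rule, and the neighbour-grouping step are all illustrative assumptions standing in for the paper's smoothing, outlier-analysis, and refined-analysis stages.

```python
import statistics

def detect_wheel_defects(strain, window=11, z_threshold=3.0):
    """Sketch of a three-step defect detection pipeline.
    `strain` is a list of rail strain samples from one array pass;
    `window` and `z_threshold` are illustrative choices."""
    # Step 1: de-trend by subtracting a moving-average smoothing of the signal
    half = window // 2
    detrended = []
    for i in range(len(strain)):
        lo, hi = max(0, i - half), min(len(strain), i + half + 1)
        detrended.append(strain[i] - statistics.fmean(strain[lo:hi]))

    # Step 2: flag "novel" responses by outlier analysis on normalized residuals
    mu = statistics.fmean(detrended) if detrended else 0.0
    sigma = statistics.pstdev(detrended) if detrended else 0.0
    sigma = sigma or 1.0  # avoid division by zero on a flat signal
    novel = [i for i, v in enumerate(detrended)
             if abs((v - mu) / sigma) > z_threshold]

    # Step 3: refined local analysis -- group adjacent novel samples
    # into candidate defect locations
    defects, current = [], []
    for i in novel:
        if current and i - current[-1] > 1:
            defects.append(current)
            current = []
        current.append(i)
    if current:
        defects.append(current)
    return defects
```

A localized strain spike then comes back as a single candidate defect, while a flat (defect-free) signal yields an empty list.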

A Study on the Management of Blended Learning at School Library: Focusing on Reading Club Program Linked with Free Semester System (학교도서관의 블렌디드 러닝 운영에 관한 연구 - 자유학기제 연계 독서동아리 프로그램을 중심으로 -)

  • Song, Jiae
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.55 no.2
    • /
    • pp.179-200
    • /
    • 2021
  • This study is aimed at analyzing cases of management focusing on a reading club using blended learning at a school library, together with the related programs of the free semester system. The study designed a research model for the blended-learning-based management of school libraries, and analyzed the activities of school library reading clubs for participants in programs linked with the free semester system. As a result of the analysis, first, the reliability level was satisfactory in all areas of stability, consistency, predictability and verification for the related variables of the research model. Second, a significant relation was verified in the correlation analysis between the blended activities and the activities of career search and career design. Third, since a significant positive effect was shown for the contact-free activities in the areas of blended learning activities and career search and career design activities, it was verified that programs linked with reading clubs of the free semester system have stronger positive effects in the contact-free activities. Last but not least, programs involving local governments to support reading clubs at school libraries are presented, and directions for managing blended learning at school libraries are suggested.

Evaluation of Road and Traffic Information Use Efficiency on Changes in LDM-based Electronic Horizon through Microscopic Simulation Model (미시적 교통 시뮬레이션을 활용한 LDM 기반 도로·교통정보 활성화 구간 변화에 따른 정보 이용 효율성 평가)

  • Kim, Hoe Kyoung;Chung, Younshik;Park, Jaehyung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.2
    • /
    • pp.231-238
    • /
    • 2023
  • Since there is a limit to the physically visible horizon that sensors for autonomous driving can perceive, complementary use of digital map data such as a Local Dynamic Map (LDM) along the probable route of an Autonomous Vehicle (AV) has been proposed for safe and efficient driving. Although the amount of digital map data may be small compared to the amount of information collected from the sensors of an AV, efficient management of map data is essential for the efficient information processing of AVs. The objective of this study is to analyze the efficiency of information use and the information processing time of an AV as the active section of LDM-based static road and traffic information is expanded. To this end, a microscopic simulation model, VISSIM with VISSIM COM, was employed, and an area of about 9 km × 13 km was selected in the Busan Metropolitan Area, which includes heterogeneous traffic flows (i.e., uninterrupted and interrupted flows) as well as various road geometries. In addition, the LDM information used by the AVs refers to a real high-definition map (HDM) built on the basis of ISO 22726-1. As a result of the analysis, as the electronic horizon area increases, short links are intensively recognized on interrupted urban roads and the sum of link lengths increases as well, whereas on uninterrupted roads the number of recognized links is relatively small but the sum of link lengths is large, due to a small number of long links. Therefore, this study showed that an efficient range of electronic horizon for HDM data collection, processing, and management is 600 m on interrupted urban roads, considering the 12 links corresponding to three downstream intersections, and 700 m on uninterrupted roads, associated with a 10 km sum of link lengths.
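
The trade-off the abstract describes, i.e. how many downstream links fall inside a given electronic horizon and what their total length is, can be illustrated with a small sketch. This is not the paper's VISSIM COM procedure; the ordered list of link lengths is an assumed simplification of the LDM route data.

```python
def links_in_horizon(links, horizon_m):
    """Count the downstream links whose start lies within the electronic
    horizon and report their total length.
    `links` is an ordered list of downstream link lengths in metres along
    the AV's probable route (an illustrative simplification)."""
    selected, travelled = [], 0.0
    for length in links:
        if travelled >= horizon_m:
            break  # this link starts beyond the horizon
        selected.append(length)
        travelled += length
    return len(selected), sum(selected)
```

With short urban links the horizon captures many links whose lengths sum to roughly the horizon itself, whereas a single long uninterrupted-flow link can exhaust the same horizon on its own — the qualitative pattern reported in the abstract.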

Interactive 3D Visualization of Ceilometer Data (운고계 관측자료의 대화형 3차원 시각화)

  • Lee, Junhyeok;Ha, Wan Soo;Kim, Yong-Hyuk;Lee, Kang Hoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.2
    • /
    • pp.21-28
    • /
    • 2018
  • We present interactive methods for visualizing the cloud height data and the backscatter data collected from ceilometers in a three-dimensional virtual space. Because ceilometer data is high-dimensional, large-size data associated with both spatial and temporal information, it is practically impossible to convey all aspects of ceilometer data with static, two-dimensional images. Based on three-dimensional rendering technology, our visualization methods allow the user to observe both the global variations and the local features of the three-dimensional representations of ceilometer data from various angles, by interactively manipulating the timing and the view as desired. The cloud height data, coupled with terrain data, is visualized as a realistic cloud animation in which many clouds form and dissipate over the terrain. The backscatter data is visualized as a three-dimensional terrain which effectively represents how the amount of backscatter changes with time and altitude. Our system facilitates the multivariate analysis of ceilometer data by enabling the user to select the date to be examined, the level of detail of the terrain, and additional data such as the planetary boundary layer height. We demonstrate the usefulness of our methods through various experiments with real ceilometer data collected from 93 sites scattered across the country.

Comparative analysis of two methods of laser induced boron isotopes separation

  • Lyakhov, K.A.;Lee, H.J.
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2011.02a
    • /
    • pp.407-408
    • /
    • 2011
  • Natural boron consists of two stable isotopes, 10B and 11B, with natural abundances of 18.8 atom percent for 10B and 81.2 atom percent for 11B. The thermal neutron absorption cross-sections for 10B and 11B are 3837 barn and 0.005 barn, respectively. 10B-enriched compounds are used for control rods and as reactor coolant additives. In this work, two methods for boron enrichment were analysed: 1) Gas irradiation under static conditions. Dissociation occurs due to multiphoton absorption by specific isotopes in an appropriately tuned laser field. Red-shifted IR laser pulses are usually used; increasing the laser intensity also improves selectivity to some degree. In order to prevent recombination of the dissociated molecules, BCl3 is mixed with H2S. 2) The SILARC method. Advantages of this method: a) Gas cooling helps to split and narrow the boron isotope absorption bands. In order to achieve better selectivity, the BCl3 gas has to be substantially diluted (~0.01%-5%) in a mixture with a carrier gas. b) The laser intensity is lower than in the first method. Some preliminary calculations of the dissociation and recombination energetics with carrier gas molecules for both methods will be demonstrated. Boron separation in the SILARC method can be represented as a multistage process: 1) a mixture of BCl3 with carrier gas is put into a reservoir; 2) the gas is overcooled by expansion through a Laval nozzle; 3) IR multiphoton absorption occurs in the gas irradiated by a specifically tuned laser field, with subsequent gradual gas condensation in the outlet chamber. It is planned to develop software which covers these stages. 
This software will rely on the following available software based on quantum molecular dynamics in an external quantized field: 1) WavePacket: each particle is treated semiclassically, based on the Wigner transform method; 2) Turbomole: it is based on local density methods such as density functional theory (DFT) and its refinement, the coupled-cluster approach (CC), to take quantum correlation into account. These models will be used to extract information on kinetic coefficients and their dependence on the applied external field. Information on radiative corrections to the equation of state induced by the laser field, which take into account a possible phase transition (or crossover), can also be revealed. This mixed-phase equation of state with quantum corrections will then be used in hydrodynamic simulations. Moreover, the results of these hydrodynamic simulations can be compared with the results of CFD calculations. The first reasonable question to ask before starting the CFD simulations is whether turbulent effects are significant or not, and how to model turbulence. The questions of which laser beam parameters and outlet chamber geometry are most suitable for irradiating the entire gas volume are also discussed. The relationship between the enrichment factor and the stagnation pressure and temperature, based on experimental data, is also reported.


Lethal Effects of Radiation and Platinum Analogues on Multicellular Spheroids of HeLa Cells (HeLa 세포의 Spheroid에 대한 방사선과 Platinum 유사체의 치사 효과)

  • Hong, Seong-Eon
    • Radiation Oncology Journal
    • /
    • v.7 no.2
    • /
    • pp.149-156
    • /
    • 1989
  • Multicellular tumor spheroids of HeLa cells were grown in a static culture system. Samples of spheroids were exposed for 2 h to graded concentrations of cis-platinum and its analogue, carboplatin, and the response was then assayed by the survival of clonogenic cells. The purpose of the present experiment is to clarify the effectiveness of these platinum compounds and to evaluate the intrinsic radiosensitivity of cells using spheroids of HeLa cells as an experimental in vitro model. Variations in the drug sensitivity of monolayers as well as spheroids were also evaluated in cell-survival curves. In the cis-platinum concentration-survival curve for spheroids, there was a large shoulder extending as far as Cq = 3.4 µM, after which the survival curve decreased exponentially with a C0 value of 1.2 µM. While the C0 for the spheroids showed essentially no significant change, the Cq value was larger than that of monolayers. This suggests that the effect of cis-platinum is greater on the monolayer, with its actively proliferating cells, than on hypoxic ones. In the carboplatin concentration-survival curves, the C0 value of spheroids was 15.0 µM, and the ratio to the C0 from monolayer cells (32.5 µM) was 0.40, thus indicating that the spheroids had greater sensitivity to carboplatin than monolayers. Therefore, the effect of carboplatin is mainly on the deeper layers of the spheroids, acting as a hypoxic cell sensitizer. An enhanced effect was obtained for monolayer cells using combined X-ray and carboplatin treatment 2 hours before irradiation. The isobologram analysis at a surviving-fraction level of 0.01 indicated that the effect of the two agents was truly supra-additive. From these experimental data, carboplatin has attracted much recent interest as one of the most promising analogues, since it is almost free of nephrotoxicity and causes less gastrointestinal toxicity than cis-platinum. 
Interaction between carboplatin and radiation might play an important role in more effective local tumor control.
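
The shoulder-plus-exponential shape described by the Cq and C0 parameters can be written as a small function. The functional form below (survival near 1 up to the quasi-threshold Cq, then exponential decay with slope constant C0) is an assumption modelled on standard shouldered survival curves, not an equation given in the abstract.

```python
import math

def surviving_fraction(c_uM, cq_uM, c0_uM):
    """Shouldered concentration-survival curve: survival ~1 below the
    quasi-threshold concentration Cq, then exponential decay where C0 is
    the concentration increment reducing survival by a factor of e.
    The form is an illustrative assumption; parameter names follow the abstract."""
    if c_uM <= cq_uM:
        return 1.0
    return math.exp(-(c_uM - cq_uM) / c0_uM)
```

With the spheroid cis-platinum values (Cq = 3.4 µM, C0 = 1.2 µM), survival stays at 1 through the shoulder and drops by a factor of e for each additional 1.2 µM beyond it.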


Analysis of Holdup Characteristics of Large and Small Bubbles in Three-Phase Fluidized Beds by using a Dynamic Gas Disengagement Method (삼상유동층에서 동력학적 기체유출 측정방법에 의한 큰 기포와 작은 기포의 체류량 특성 해석)

  • Lim, Hyun Oh;Lim, Dae Ho;Seo, Myung Jae;Kang, Yong;Jung, Heon;Lee, Ho Tae
    • Korean Chemical Engineering Research
    • /
    • v.49 no.5
    • /
    • pp.605-610
    • /
    • 2011
  • The phase holdup characteristics of relatively large and small bubbles were investigated in a three-phase (gas-liquid-solid) fluidized bed with a diameter of 0.105 m (ID) and a height of 2.5 m. The effects of gas velocity (0.01~0.07 m/s), liquid velocity (0.01~0.07 m/s) and particle size (0.5~3.0 × 10⁻³ m) on the holdups of relatively large and small bubbles were determined. The holdups of the two kinds of bubbles in the three-phase fluidized bed were estimated by means of the static pressure drop method, using the pressure drops corresponding to each kind of bubble, which were obtained by the dynamic gas disengagement method. Dried and filtered air regulated by a gas regulator, tap water, and glass beads with a density of 2500 kg/m³ served as the gas, liquid and fluidized solid phases, respectively. The two kinds of bubbles, relatively large and small, were effectively detected and distinguished by measuring the pressure drop variation after stopping the gas and liquid flow into the column as a step function: the slope of the pressure drop increase with elapsed time was quite different for each. It was found that the holdup of relatively large bubbles increased with increasing gas velocity but decreased with liquid velocity; however, this holdup showed a local minimum with variation of the size of the fluidized solid particles. The holdup of relatively small bubbles increased with an increase in gas velocity or solid particle size, while it decreased slightly with an increase in liquid velocity. The holdups of the two kinds of bubbles were well correlated in terms of the operating variables within these experimental conditions.
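
The conversion from a disengagement-resolved pressure drop to a bubble-class holdup can be sketched as below. This is a simplified illustration of the static pressure drop idea, assuming the hydrostatic head recovered as each bubble class leaves the bed is attributed entirely to that class and neglecting the solid-phase contribution; it is not the authors' exact data reduction.

```python
G = 9.81  # gravitational acceleration, m/s^2

def bubble_holdups(dp_large_Pa, dp_small_Pa, rho_liquid, height_m):
    """Convert the pressure-drop contribution of each bubble class
    (separated by the dynamic gas disengagement method) into a gas holdup
    over a bed section of height `height_m`.
    eps = dP / (rho_l * g * H) is an illustrative simplification that
    treats the suspension density as the liquid density."""
    eps_large = dp_large_Pa / (rho_liquid * G * height_m)
    eps_small = dp_small_Pa / (rho_liquid * G * height_m)
    return eps_large, eps_small
```

For example, over a 1 m section of water, pressure contributions of 981 Pa and 490.5 Pa correspond to large- and small-bubble holdups of 0.10 and 0.05.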

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where total collapse occurs in a single moment. The key variables used in corporate default prediction vary over time. This is confirmed by Deakin's (1972) study, which, compared with the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, the shifting importance of predictive variables was also found with Zmijewski's (1984) and Ohlson's (1980) models. However, the studies carried out in the past use static models; most of them do not consider changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Based on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training data, validation data, and test data of 7, 2, and 1 years, respectively. 
In order to construct a consistent bankruptcy model across the flow of time, we first train a time series deep learning algorithm model using the data before the financial crisis (2000~2006). The parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the learning data and exhibits excellent prediction power. After that, each bankruptcy prediction model is retrained by integrating the learning data and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years. In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression analysis to the existing variable-selection methods (multiple discriminant analysis, the logit model), it is shown that the deep learning time series algorithm model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. In the case of corporate data, there are limitations of nonlinear variables, multi-collinearity among variables, and lack of data. 
While the logit model is nonlinear, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally towards intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling, and it also delivers greater predictive power. Through the Fourth Industrial Revolution, the current government and other governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet the field of deep learning time series research for the financial industry is still insufficiently developed. This is an initial study on deep learning time series algorithm analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists who begin studies combining financial data and deep learning time series algorithms.
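
The chronological 7/2/1-year split described in the abstract (train on pre-crisis years, tune on the crisis years, test on the final year) can be expressed directly; the tuple layout of `records` is an assumption for illustration.

```python
def temporal_split(records):
    """Time-based split of annual corporate data from 2000-2009:
    7 training years (2000-2006), 2 validation years spanning the
    financial crisis (2007-2008), and 1 test year (2009).
    `records` is a list of (year, features, default_label) tuples
    (an assumed layout for illustration)."""
    train = [r for r in records if 2000 <= r[0] <= 2006]
    val   = [r for r in records if 2007 <= r[0] <= 2008]
    test  = [r for r in records if r[0] == 2009]
    return train, val, test
```

Splitting by year rather than at random is the key point: it prevents post-crisis observations from leaking into the training set, which is what the abstract means by compensating for time-dependent bias.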