• Title/Summary/Keyword: Maintenance Model

Search Results: 2,629

Time Trend of Occupational Noise-induced Hearing Loss in a Metallurgical Plant With a Hearing Conservation Program

  • Adalva V. Couto Lopes;Cleide F. Teixeira;Mirella B.R. Vilela;Maria L.L.T. de Lima
    • Safety and Health at Work / v.15 no.2 / pp.181-186 / 2024
  • Background: This study aimed to analyze the trend of occupational noise-induced hearing loss (ONIHL) in Brazilian workers at a metallurgical plant with a hearing conservation program (HCP), which had been addressed in a previous study. Methods: All 152 workers in this time series (2003-2018) participated in the HCP and used personal protective equipment. All annual audiometry records were collected from the company's electronic database. The trend of ONIHL was analyzed with the joinpoint regression model, and the hearing thresholds of ONIHL cases at the end of the series were compared with those of a national reference study. Results: The binaural mean hearing thresholds at 3, 4, and 6 kHz at the end of the series were higher for ages ≥50 years, exposures ≥85 dB(A), time since admission >20 years, and maintenance workers, although significance was found only for the age groups. There was an increasing time trend of ONIHL, though with a low percentage variation over the period (AAPC = 3.5%; p = 0.01). Hearing thresholds in this study differed from those of the reference study. Conclusion: Although the expected stationary trend was not observed, ONIHL did not evolve at the pace expected for a noise-exposed population. These findings signal to the scientific community and public authorities that good ONIHL control is possible when an HCP is well implemented.
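The increasing trend above is summarized by the average annual percent change (AAPC), which joinpoint regression derives by combining each segment's annual percent change (APC), weighted by segment length, on the log scale. A minimal sketch of that standard formula; the two-segment inputs are hypothetical, not the study's data:

```python
import math

def aapc(segments):
    """Average Annual Percent Change from joinpoint segments.

    segments: list of (apc_percent, n_years) pairs, one per segment.
    Standard weighted-log formula:
    AAPC = 100 * (exp(sum(w_i * ln(1 + APC_i/100)) / sum(w_i)) - 1)
    """
    total = sum(n for _, n in segments)
    log_mean = sum(n * math.log(1 + apc / 100) for apc, n in segments) / total
    return 100 * (math.exp(log_mean) - 1)

# A single segment spanning the whole series collapses to its APC:
print(round(aapc([(3.5, 15)]), 1))  # 3.5
# Hypothetical two-segment series: flat early years, rising later:
print(round(aapc([(0.0, 10), (7.0, 5)]), 2))  # 2.28
```

Because the averaging happens on the log scale, a short steep segment cannot dominate a long flat one, which is why a series can end with rising thresholds yet still show a low overall AAPC.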

Self-care Through Dynamic Appetite Alteration: A Grounded Theory Study of Patient Experience on Maintenance Hemodialysis

  • Wonsun Hwang;Ji-hyun Lee;Juha Nam;Jieun Oh;Inwhee Park;Mi Sook Cho
    • Clinical Nutrition Research / v.11 no.4 / pp.264-276 / 2022
  • Hemodialysis (HD) patients can experience appetite alterations that affect meals and nutritional status, yet few qualitative studies have assessed the chronic impact of HD on the everyday diet. This study aimed to comprehensively characterise the experiences of HD patients adapting to appetite alteration. Semi-structured, face-to-face interviews were conducted in a unit of a tertiary hospital. The interview guide, developed after reviewing the literature and drawing on the researchers' clinical experience, addressed adaptive processes. A single researcher conducted all interviews to maintain consistency in data collection, and the content was analysed in NVivo 11 using grounded theory and constant comparison analysis. The mean age and HD vintage of the 14 participants were 60 and 5.8 years, respectively. Through axial and selective coding, we developed a self-care model of HD patient experiences with appetite alteration. Differences in urea sensitivity, taste alteration, and social support could be explained by the timing of transitions, life events, and responses to stress. Self-care is adapted through the processes of "self-registration" and "self-reconstruction," starting with "disruption." At the adjustment stage, 4 self-management types were derived from patterns of self-care: self-initiator, follower, realist, and pessimist. The results provide unique qualitative insight into the lived experiences of HD patients with appetite alteration and their self-care processes. By recognising dietary challenges, health teams can better support HD patients in the transition from dietary education to self-care.

A novel approach for the definition and detection of structural irregularity in reinforced concrete buildings

  • S.P. Akshara;M. Abdul Akbar;T.M. Madhavan Pillai;Renil Sabhadiya;Rakesh Pasunuti
    • Structural Monitoring and Maintenance / v.11 no.2 / pp.101-126 / 2024
  • To avoid irregularities in buildings, design codes worldwide provide detailed guidelines for checking and rectifying them. However, the criteria used to define and identify each plan and vertical irregularity are specific and vary between countries' codes, making them difficult to implement. This short communication proposes a novel approach for quantifying different types of structural irregularity using a common parameter, termed the unified identification factor, defined exclusively for the columns based on their axial loads and tributary areas. The calculation of the identification factor is demonstrated through the analysis of rectangular and circular reinforced concrete models in ETABS v18.0.2, which are further modified to generate plan-irregular (torsional irregularity, cut-out in floor slab, and non-parallel lateral force system) and vertically irregular (mass irregularity, vertical geometric irregularity, and floating columns) models. The identification factor is calculated for all columns of a building, and the range within which its values lie is identified. The results indicate that this range is much wider for an irregular building than for a regular configuration, implying a strong correlation between the identification factor and structural irregularity. Further, the identification factor is compared across columns within a floor and between floors for each building model; the value is abnormally high or low for a column in the vicinity of an irregularity. The proposed factor could thus be used in the preliminary structural design phase to avoid complications that might arise from the geometry of the structure under lateral loads. The unified approach could also be incorporated in future code revisions, replacing the numerous criteria currently used to classify different types of irregularities.
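The abstract defines the unified identification factor only as a column-level quantity derived from axial load and tributary area. A minimal sketch under that reading; the load-per-tributary-area form and the mean normalization are assumptions made for illustration, not the paper's actual definition:

```python
def identification_factors(columns):
    """Hypothetical per-column identification factor.

    columns: list of (axial_load_kN, tributary_area_m2).
    Assumed form: axial load per unit tributary area, normalized
    by the building mean so a regular layout clusters near 1.0.
    """
    raw = [p / a for p, a in columns]
    mean = sum(raw) / len(raw)
    return [r / mean for r in raw]

def irregularity_range(columns):
    """Spread of the factor; a wide range suggests irregularity."""
    f = identification_factors(columns)
    return max(f) - min(f)

regular = [(500, 10), (510, 10), (495, 10), (505, 10)]
irregular = [(500, 10), (900, 10), (300, 10), (120, 4)]  # e.g. near a floating column
print(irregularity_range(regular) < irregularity_range(irregular))  # True
```

The screening logic mirrors the paper's finding: a regular configuration yields a narrow band of values, while a column near an irregularity pushes the range wide open.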

Comparative research on gravity load simulation devices for structural seismic tests based on FEA

  • Yonglan Xie;Songtao Yan;Yurong Wang;Shuwei Song
    • Structural Monitoring and Maintenance / v.11 no.3 / pp.235-246 / 2024
  • Structural seismic tests usually need to simulate the gravity load borne by the structure; the loading device should keep the force magnitude and direction unchanged while accommodating structural deformation. At present, there are two main ways to simulate gravity load in the laboratory: a roller group and prestress. However, few of the existing experimental studies analyze the differences between these two methods. In this paper, ABAQUS is used to perform static pushover analyses of a reinforced concrete column and frame, the most common models in structural seismic tests. The results show that the horizontal restoring force under the prestressed loading method is significantly greater than under the roller group, and the difference grows as horizontal deformation increases. The reason is that the prestressed loading method does not capture the adverse gravity second-order (P-Delta) effect. Therefore, the restoring force obtained under prestressed loading should be corrected by deducting the additional shear force caused by the P-Delta effect. After correction, the difference in restoring force between the two methods is significantly reduced (at a storey drift of 1/550, the relative error is within 1%; at 1/50, it is about 3%). These results can guide the selection of gravity load simulation devices and the processing of their data in structural seismic tests.
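The correction described above reduces to simple statics: for a drift ratio Δ/h, a gravity load P carried by the prestressing system contributes an additional shear of P·(Δ/h), which is deducted from the measured restoring force. A minimal sketch; the column values below are hypothetical, not test data from the paper:

```python
def corrected_restoring_force(f_measured, p_gravity, storey_drift):
    """Deduct the P-Delta additional shear from a prestress-loaded test.

    storey_drift is the drift ratio delta/h, so the additional
    shear is simply P * (delta/h). Force units follow the inputs.
    """
    return f_measured - p_gravity * storey_drift

# Hypothetical column: 1000 kN gravity load, 200 kN measured force.
print(round(corrected_restoring_force(200.0, 1000.0, 1 / 550), 2))  # 198.18
print(round(corrected_restoring_force(200.0, 1000.0, 1 / 50), 1))   # 180.0
```

The example also shows why the discrepancy grows with deformation: the deducted term scales linearly with drift, so it is negligible at 1/550 but no longer at 1/50.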

Preliminary Economic Analysis of 20 MW Super-Capacity Wind Turbine Generator in the East Sea of Korea (국내 동해지역 20 MW급 초대용량 풍력발전시스템 사전 경제성 분석)

  • Jun-Young Lee;Seo-Yoon Choi;Rae-Hyoung Yuck;Kwang-Tae Ha;Jae-ho Jeong
    • Journal of Wind Energy / v.13 no.4 / pp.50-57 / 2022
  • Renewable energy is emerging as a way for the government to carry out its 2030 carbon-neutrality policy, and demand for wind turbine generators is increasing accordingly. Because of restrictions arising from civil complaints, offshore wind power generators are actively being developed. Offshore wind power, however, has higher maintenance, material, and installation costs than onshore wind power, so an economic evaluation that accounts for revenues and costs is an important task. The levelized cost of energy (LCOE) is the economic evaluation index used in the energy field. In this paper, based on the AEP calculated by windPRO, the LCOE was calculated with the wind power cost estimation model published in the NREL economic analysis report for three cases: a single 15 MW unit, a single 20 MW unit, and a seven-unit 20 MW farm. The resulting LCOE was 0.140 $/kWh for the single 15 MW unit, 0.142 $/kWh for the single 20 MW unit, and 0.119 $/kWh for the 20 MW farm. It was confirmed that the single 20 MW installation was more economical than the single 15 MW installation, and that the 20 MW farm was the most economical.
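The LCOE comparison rests on a simple levelization: annualized capital cost plus operating cost, divided by annual energy production (AEP). A minimal NREL-style sketch; all inputs below are illustrative assumptions, not the paper's cost model values:

```python
def lcoe(capex, fixed_charge_rate, annual_opex, aep_kwh):
    """Levelized cost of energy, $/kWh (simple NREL-style form):
    LCOE = (FCR * CapEx + OpEx) / AEP
    """
    return (fixed_charge_rate * capex + annual_opex) / aep_kwh

# Illustrative only: a 20 MW offshore unit at a 40% capacity factor
# produces 20,000 kW * 0.40 * 8760 h ~ 70.1 GWh per year.
aep = 20_000 * 0.40 * 8760  # kWh/yr
value = lcoe(capex=80_000_000, fixed_charge_rate=0.07,
             annual_opex=2_500_000, aep_kwh=aep)
print(round(value, 3))  # 0.116
```

The structure of the formula explains the farm result: a seven-unit farm spreads fixed infrastructure cost over much more AEP, pulling the $/kWh figure down relative to a single unit.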

Simulated Analysis on Cooling System Performance Influenced by Faults Occurred in Enthalpy Sensor for Economizer Control

  • Minho KIM;San JIN;Ahmin JANG;Beungyong PARK;Sung Lok DO
    • International conference on construction engineering and project management / 2024.07a / pp.1002-1009 / 2024
  • An economizer control cools a building by modulating the outdoor air (OA) intake rate according to measured OA conditions. An OA enthalpy sensor can become faulty during operation after installation, most commonly in the form of an offset, which creates a difference between the measured and actual enthalpy. This difference may result in more or less OA intake than designed, and the unintended OA amount negatively affects cooling system performance, especially cooling energy consumption. Therefore, this study analyzed cooling system performance under a faulty sensor in economizer enthalpy control, using the fault model in EnergyPlus, a building energy simulation tool. The analysis showed that a sensor with a positive offset took in less OA than was available, leading to more cooling energy consumed by equipment such as the chiller and circulation pump. Conversely, a sensor with a negative offset took in more OA than required, which also increased cooling energy consumption. Based on these results, this study suggests continuous maintenance and diagnosis of the enthalpy sensor used in an economizer system, enabling proper economizer operation and maximum cooling energy savings.
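The offset fault mechanism can be sketched as a simplified high-limit enthalpy economizer decision. The controller logic and numbers below are illustrative assumptions, not the EnergyPlus fault model itself:

```python
def economizer_oa_fraction(h_oa_true, h_ra, sensor_offset=0.0,
                           oa_min=0.15, oa_max=1.0):
    """Simplified high-limit enthalpy economizer decision.

    Compares the *measured* OA enthalpy (true value plus sensor
    offset, kJ/kg) against return-air enthalpy h_ra: below the
    limit, open to oa_max for free cooling; otherwise fall back
    to the minimum ventilation fraction oa_min.
    """
    h_measured = h_oa_true + sensor_offset
    return oa_max if h_measured < h_ra else oa_min

h_ra = 55.0  # return-air enthalpy, kJ/kg
h_oa = 50.0  # outdoor air actually favorable for free cooling

print(economizer_oa_fraction(h_oa, h_ra, sensor_offset=0.0))   # 1.0
# Positive offset makes OA look too hot: less OA than available.
print(economizer_oa_fraction(h_oa, h_ra, sensor_offset=+8.0))  # 0.15
# Negative offset admits hot OA when it should not.
print(economizer_oa_fraction(60.0, h_ra, sensor_offset=-8.0))  # 1.0
```

Both failure directions waste cooling energy, matching the study's result: a positive offset forgoes free cooling, while a negative offset imports an unnecessary outdoor cooling load.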

Assessment of Fire-Damaged Mortar using Color image Analysis (색도 이미지 분석을 이용한 화재 피해 모르타르의 손상 평가)

  • Park, Kwang-Min;Lee, Byung-Do;Yoo, Sung-Hun;Ham, Nam-Hyuk;Roh, Young-Sook
    • Journal of the Korea institute for structural maintenance and inspection / v.23 no.3 / pp.83-91 / 2019
  • The purpose of this study is to assess fire-damaged concrete structures using a digital camera and image-processing software. Mortar and paste samples with W/C = 0.5 (normal strength) and 0.3 (high strength) were heated in an electric furnace from 100 °C to 1000 °C. The paste was ground into a powder to measure CIELAB chromaticity, the samples were photographed with a digital camera, and RGB chromaticity was measured with color-intensity analysis software. At a heating temperature of 400 °C, the residual compressive strength of the W/C = 0.5 and 0.3 samples was 87.2% and 86.7%, respectively. Above 500 °C, strength dropped sharply, to 55.2% and 51.9%, and at 700 °C or higher the W/C = 0.5 and 0.3 samples retained only 26.3% and 27.8% of their strength, so the durability of the structure could not be secured. The L*a*b* analysis shows that b* increases rapidly after 700 °C, indicating that the yellow component becomes strong. The RGB analysis found that the histogram kurtosis and frequency of red and green increase after 700 °C, i.e., the number of red and green pixels increases. Therefore, it appears possible to estimate the degree of damage by checking the change in yellow (b* or R+G) when analyzing the chromaticity of fire-damaged concrete structures.
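Since the study finds that b* rises sharply only above 700 °C, a crude screening rule can be sketched from the relative jump in yellowness. The 50% rise threshold below is an assumed illustration, not a calibrated value from the paper:

```python
def likely_heated_above_700(b_star_reference, b_star_measured,
                            rel_threshold=0.5):
    """Flag probable exposure above ~700 C from the jump in CIELAB b*.

    b_star_reference: b* of comparable undamaged material;
    b_star_measured: b* of the suspect surface. The study reports
    a sharp b* (yellowness) rise after 700 C; the 50% relative
    threshold is an assumption for illustration only.
    """
    return (b_star_measured - b_star_reference) / b_star_reference > rel_threshold

print(likely_heated_above_700(10.0, 18.0))  # True  (+80% yellowness)
print(likely_heated_above_700(10.0, 11.0))  # False (+10%)
```

In practice such a rule matters because the 700 °C band is exactly where residual strength falls below ~28%, the point at which the study says durability can no longer be secured.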

Assessment of water supply reliability in the Geum River Basin using univariate climate response functions: a case study for changing instreamflow managements (단변량 기후반응함수를 이용한 금강수계 이수안전도 평가: 하천유지유량 관리 변화를 고려한 사례연구)

  • Kim, Daeha;Choi, Si Jung;Jang, Su Hyung;Kang, Dae Hu
    • Journal of Korea Water Resources Association / v.56 no.12 / pp.993-1003 / 2023
  • Due to increasing greenhouse gas emissions, the global mean temperature has risen by 1.1 °C relative to pre-industrial levels, and significant changes are expected in the functioning of water supply systems. In this study, we assessed the impacts of climate change and instreamflow management on water supply reliability in the Geum River basin, Korea. We proposed univariate climate response functions, in which mean precipitation and potential evaporation are coupled into a single explanatory variable, to assess the impact of climate stress on multiple water supply reliabilities. To this end, natural streamflows were generated for the 19 sub-basins with the conceptual GR6J model and input into the Water Evaluation And Planning (WEAP) model, whose dynamic optimization allowed us to assess water supply reliability against the 2020 water demand projections. Results showed that, when minimizing the water shortage of the entire river basin under the 1991-2020 climate, water supply reliability was lowest in the Bocheongcheon sub-basin. In a scenario where the priority of instreamflow maintenance was raised to that of municipal and industrial water use, water supply reliability in the Bocheongcheon, Chogang, and Nonsancheon sub-basins decreased significantly. Stress tests with 325 sets of climate perturbations showed that reliability in these three sub-basins decreased considerably under all climate stresses, while sub-basins connected to large infrastructure did not change significantly. Combining the 2021-2050 climate projections with the stress-test results, water supply reliability in the Geum River basin is expected to generally improve, but if the priority of instreamflow maintenance is raised, water shortages are expected to worsen in geographically isolated sub-basins. We suggest that a climate response function with a single explanatory variable can assess climate change impacts on many sub-basins' performance simultaneously.
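The idea of a univariate climate response function can be sketched with a plain least-squares fit: stress-test each sub-basin, fit reliability against the single climate index, then read off projections. The P/PET form of the explanatory variable and every data point below are hypothetical illustrations, not the study's stress-test results:

```python
def fit_linear(xs, ys):
    """Least-squares line y = a + b*x (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical stress-test results for one sub-basin. The single
# explanatory variable couples mean precipitation P and potential
# evaporation PET (assumed here as the ratio P/PET); the response
# is water-supply reliability.
index = [0.8, 0.9, 1.0, 1.1, 1.2]
reliability = [0.82, 0.88, 0.93, 0.96, 0.98]
a, b = fit_linear(index, reliability)

# Evaluate a hypothetical 2021-2050 projection with P/PET = 1.05:
print(round(a + b * 1.05, 3))  # ~0.934
```

Fitting one such curve per sub-basin is what lets a single set of climate perturbations be reused to score all 19 sub-basins at once.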

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data are multidimensional time series, which poses a dual difficulty: the characteristics of multidimensional data and of time series data must both be considered. For multidimensional data, correlations between variables should be accounted for; existing probabilistic, linear, and distance-based methods degrade under the curse of dimensionality. Time series data, in turn, are commonly preprocessed with sliding windows and time series decomposition for autocorrelation analysis, techniques that further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural network approaches are now actively studied. Statistical methods are difficult to apply to non-homogeneous data and do not detect local outliers well. Regression-based methods learn a regression formula under parametric statistics and detect abnormality by comparing predicted and actual values, but their performance drops when the model is not solid or when the data contain noise or outliers, which restricts the training data that can be used. An autoencoder, an artificial neural network trained to reproduce its input as closely as possible at its output, has many advantages over probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that satisfy neither a probability distribution nor a linearity assumption, and it can be trained unsupervised, without labeled data. However, autoencoders still struggle to identify local outliers in multidimensional data, and time series preprocessing greatly increases the data dimensionality. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that improves anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) addresses the local-outlier limitation of multidimensional data: multimodal learning is commonly used for different input types, such as voice and images, and the different modals share the autoencoder's bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the data dimension; conditional inputs are usually categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE was verified against a Unimodal Autoencoder (UAE) and an MAE. Reconstruction performance for 41 variables was examined in all three models: it varied by variable, with small losses for the Memory, Disk, and Network modals in all three autoencoders; the Process modal showed no significant difference across models, and the CPU modal performed best in the CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared; on all indicators, performance ranked CMAE, then MAE, then UAE. In particular, the recall of the CMAE was 0.9828, confirming that it detects almost all anomalies; its accuracy improved to 87.12% and its F1-score was 0.8883, which is considered suitable for anomaly detection. In practice, the proposed model has a further advantage beyond raw performance: techniques such as time series decomposition and sliding windows require managing extra procedures, and their dimensional increase slows inference, whereas the proposed model is easy to apply in terms of inference speed and model management.
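Two of the paper's key ideas can be sketched in isolation: feeding time as the conditional input via a cyclic encoding (so daily periodicity is learned without window-based dimension growth), and flagging anomalies by reconstruction error against a threshold chosen from validation data. The actual CMAE is a trained neural network; this is only a schematic of the surrounding logic:

```python
import math

def time_condition(hour_of_day):
    """Cyclic encoding of time-of-day for the conditional input.

    Mapping the hour onto the unit circle keeps 23:00 adjacent to
    00:00 and adds only two dimensions, unlike sliding windows.
    """
    angle = 2 * math.pi * hour_of_day / 24
    return (math.sin(angle), math.cos(angle))

def is_anomaly(x, x_reconstructed, threshold):
    """Flag a sample whose reconstruction error exceeds a
    threshold chosen from validation data (e.g. via the ROC curve)."""
    err = sum((a - b) ** 2 for a, b in zip(x, x_reconstructed)) ** 0.5
    return err > threshold

print(time_condition(0))                                   # (0.0, 1.0)
print(is_anomaly([1.0, 2.0], [1.1, 2.0], threshold=0.5))   # False
print(is_anomaly([1.0, 2.0], [3.0, 0.0], threshold=0.5))   # True
```

In the CMAE, the condition vector is concatenated to the encoder and decoder inputs, so the network learns what "normal at this time of day" looks like rather than a single global profile.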

A Study on Damage factor Analysis of Slope Anchor based on 3D Numerical Model Combining UAS Image and Terrestrial LiDAR (UAS 영상 및 지상 LiDAR 조합한 3D 수치모형 기반 비탈면 앵커의 손상인자 분석에 관한 연구)

  • Lee, Chul-Hee;Lee, Jong-Hyun;Kim, Dal-Joo;Kang, Joon-Oh;Kwon, Young-Hun
    • Journal of the Korean Geotechnical Society
    • /
    • v.38 no.7
    • /
    • pp.5-24
    • /
    • 2022
  • Current performance evaluation of slope anchors qualitatively assesses the physical bonding between the anchor head and the ground, as well as cracks or breakage of the anchor head, but it does not measure these primary factors quantitatively, so time-dependent management of the anchors is almost impossible. This study evaluates a 3D numerical model built by SfM, combining UAS images with terrestrial LiDAR, to collect numerical data on the damage factors and to use those data for quantitative maintenance of anchor systems installed on slopes. The UAS 3D model, which often shows relatively low precision in the z-coordinate for vertical objects such as slopes, is combined with terrestrial LiDAR scan data to improve the accuracy of the z-coordinate measurement. After validating the system, a field test was conducted on ten anchors installed on a slope, with their heads arbitrarily damaged. The damage (cracks, breakage, and rotational displacement) was detected and numerically evaluated through the orthogonal projection of the measurement system. The results show that at 8K resolution the system can detect cracks with apertures under 0.3 mm within an error range of 0.05 mm. The system also successfully measured the volume of the damaged parts, showing that the maximum damaged area of an anchor head was within 3% of the original design guideline. The ground adhesion at the anchor head, for which the z-coordinate is critical, was almost impossible to measure with the UAS 3D numerical model alone because of blind spots; with the combined system, elevation differences between the anchor bottom and the irregular ground surface were identified, and the average over 20 locations was calculated as the ground adhesion. Additionally, rotation angles and displacements of the anchor head of less than 1" were detected. These observations confirm that the 3D numerical model can yield quantitative data on anchor damage, and such data could build a database serving as a fundamental resource for quantitative anchor damage evaluation in the future.
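Once the orthogonal projection fixes a uniform ground sample distance, the pixel-based measurements above reduce to simple arithmetic. A minimal sketch; the resolution and gap values below are hypothetical, not the field-test data:

```python
def crack_width_mm(crack_pixels, mm_per_pixel):
    """Physical crack aperture from pixel count, assuming the
    orthogonal projection gives a uniform ground sample distance."""
    return crack_pixels * mm_per_pixel

def mean_ground_adhesion_gap(elevation_diffs_mm):
    """Average anchor-bottom-to-ground elevation difference over
    the sampled points (the study averages 20 points per anchor)."""
    return sum(elevation_diffs_mm) / len(elevation_diffs_mm)

# Hypothetical 8K survey at 0.05 mm per pixel, 5-pixel-wide crack:
print(round(crack_width_mm(5, 0.05), 2))                    # 0.25
# Hypothetical per-point gaps (mm) between anchor bottom and ground:
print(round(mean_ground_adhesion_gap([1.2, 0.8, 1.0, 1.4]), 1))  # 1.1
```

The 0.05 mm per pixel figure is chosen to match the reported error range; the averaging step is what turns an irregular ground surface into a single ground-adhesion number per anchor.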