• Title/Summary/Keyword: Database model


Time Trend of Occupational Noise-induced Hearing Loss in a Metallurgical Plant With a Hearing Conservation Program

  • Adalva V. Couto Lopes;Cleide F. Teixeira;Mirella B.R. Vilela;Maria L.L.T. de Lima
    • Safety and Health at Work
    • /
    • v.15 no.2
    • /
    • pp.181-186
    • /
    • 2024
  • Background: This study aimed to analyze the trend of occupational noise-induced hearing loss (ONIHL) in Brazilian workers at a metallurgical plant with a hearing conservation program (HCP), which had been addressed in a previous study. Methods: All 152 workers in this time series (2003-2018) participated in the HCP and used personal protective equipment. All annual audiometry records in the company's software were collected from the electronic database. The trend of ONIHL was analyzed with a joinpoint regression model. The hearing thresholds of ONIHL cases at the end of the series were compared with those found in a national reference study. Results: The binaural mean hearing thresholds at 3, 4, and 6 kHz at the end of the series were higher for ages ≥50 years, exposures ≥85 dB(A), time since admission >20 years, and maintenance workers. Significance was found only for the groups divided by age. There was an increasing time trend in ONIHL, though with a low percentage variation for the period (AAPC = 3.5%; p = 0.01). Hearing thresholds in this study differed from those of the reference study. Conclusion: Although the expected stationary trend was not observed during the study period, ONIHL evolved more slowly than would be expected for a noise-exposed population. These findings signal to the scientific community and public authorities that good ONIHL control is possible when an HCP is well implemented.
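The AAPC above summarizes the joinpoint model's segment slopes on the log scale. A minimal sketch of that calculation, using hypothetical segment slopes rather than the study's fitted values:

```python
import math

def aapc(segments):
    """Average Annual Percent Change from joinpoint segments.

    Each segment is (num_years, slope_b), where slope_b is the fitted
    regression slope on the log scale (APC_i = 100 * (exp(b_i) - 1)).
    AAPC is the length-weighted average slope, back-transformed to a
    percent change.
    """
    total_years = sum(n for n, _ in segments)
    weighted_slope = sum(n * b for n, b in segments) / total_years
    return 100.0 * (math.exp(weighted_slope) - 1.0)

# Hypothetical two-segment trend (not the study's estimates):
# 10 years at ~2%/yr followed by 5 years at ~6%/yr.
segs = [(10, math.log(1.02)), (5, math.log(1.06))]
overall = aapc(segs)  # a little over 3%/yr on average
```

With a single segment, AAPC collapses to that segment's APC, which is why a flat series would give an AAPC near zero.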

Reduced risk of gastrointestinal bleeding associated with eupatilin in aspirin plus acid suppressant users: nationwide population-based study

  • Hyun Seok Lee;Ji Hyung Nam;Dong Jun Oh;Yeo Rae Moon;Yun Jeong Lim
    • The Korean journal of internal medicine
    • /
    • v.39 no.2
    • /
    • pp.261-271
    • /
    • 2024
  • Background/Aims: Mucoprotective agents such as eupatilin are often prescribed in addition to an acid suppressant to prevent gastrointestinal (GI) bleeding, despite the absence of a large-scale study. We evaluated the additional effect of eupatilin on the prevention of GI bleeding in both the upper and lower GI tract in concomitant aspirin and acid suppressant users, using the nationwide claims database of the Korean National Health Insurance Service (NHIS). Methods: An aspirin cohort was constructed using NHIS claims data from 2013 to 2020. Patients who presented with hematemesis, melena, or hematochezia were considered to have GI bleeding. A Cox proportional hazards regression model was used to determine the risk factors for GI bleeding associated with the concomitant use of GI drugs and other covariates among aspirin users. Results: Overall, 432,208 aspirin users were included. The concurrent use of an acid suppressant and eupatilin (hazard ratio [HR] = 0.85, p = 0.016, vs. acid suppressant only) was a statistically significant preventive factor for GI bleeding. Moreover, a prescription duration of more than 3 months of acid suppressant plus eupatilin (HR = 0.88, p = 0.030, vs. acid suppressant only) was also a statistically significant preventive factor. Conclusions: Eupatilin administration for ≥3 months showed an additional preventive effect on GI bleeding in concomitant aspirin and acid suppressant users. Thus, cotreatment with eupatilin for 3 months or longer is recommended to reduce GI bleeding among aspirin plus acid suppressant users.
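The hazard ratios above come from maximizing the Cox partial likelihood. A minimal one-covariate sketch with a Newton-Raphson solver on a toy cohort (the data below are invented illustrations; the study used NHIS claims data):

```python
import math

def cox_fit(data, iters=25):
    """Newton-Raphson fit of a one-covariate Cox model.

    data: list of (time, event, x) tuples; assumes no tied event times.
    Returns the estimated coefficient beta; the hazard ratio is exp(beta).
    """
    beta = 0.0
    for _ in range(iters):
        grad, info = 0.0, 0.0
        for t_i, event, x_i in data:
            if not event:
                continue  # censored records contribute only via risk sets
            # Risk set: everyone still under observation at time t_i.
            risk = [(x, math.exp(beta * x)) for t, _, x in data if t >= t_i]
            s0 = sum(w for _, w in risk)
            s1 = sum(x * w for x, w in risk)
            s2 = sum(x * x * w for x, w in risk)
            grad += x_i - s1 / s0                  # score contribution
            info += s2 / s0 - (s1 / s0) ** 2       # observed information
        beta += grad / info
    return beta

# Toy cohort: x = 1 (exposed) tends to fail earlier, so beta > 0, HR > 1.
toy = [(1, 1, 1), (2, 1, 1), (3, 1, 0), (4, 0, 1), (5, 1, 0), (6, 0, 0)]
beta = cox_fit(toy)
hr = math.exp(beta)
```

An HR of 0.85 as in the abstract corresponds to a negative coefficient (beta = ln 0.85), i.e. a protective covariate.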

The development of four efficient optimal neural network methods in forecasting shallow foundation's bearing capacity

  • Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete
    • /
    • v.34 no.2
    • /
    • pp.151-168
    • /
    • 2024
  • This research appraised the effectiveness of four optimization approaches - the cuckoo optimization algorithm (COA), multi-verse optimization (MVO), particle swarm optimization (PSO), and teaching-learning-based optimization (TLBO) - combined with an artificial neural network (ANN) in predicting the bearing capacity of shallow foundations on cohesionless soils. The study utilized a database of 97 laboratory experiments, with 68 experiments for the training data set and 29 for the testing data set. The ANN algorithms were optimized by adjusting variables, such as the population size and the number of neurons in each hidden layer, through trial-and-error techniques. Input parameters included width, depth, geometry, unit weight, and angle of shearing resistance. After a sensitivity analysis, the optimized ANN architecture was determined to be 5×5×1. All four hybrid models (COA-MLP, MVO-MLP, PSO-MLP, and TLBO-MLP) demonstrated strong prediction performance, and the MVO-MLP model generated network outputs that matched the measured values more accurately than the other models. The training data sets showed RMSE and R2 values of (0.07184 and 0.9819), (0.04536 and 0.9928), (0.09194 and 0.9702), and (0.04714 and 0.9923) for the COA-MLP, MVO-MLP, PSO-MLP, and TLBO-MLP methods, respectively. Similarly, the testing data sets produced RMSE and R2 values of (0.08126 and 0.07218), (0.07218 and 0.9814), (0.10827 and 0.95764), and (0.09886 and 0.96481), respectively.
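Of the four metaheuristics, PSO is compact enough to sketch. The toy below minimizes a simple sphere function instead of tuning ANN weights, with illustrative hyperparameters that are not the paper's settings:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with a basic particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)
best, best_val = pso(sphere, dim=3)
```

In the hybrid PSO-MLP setting, `f` would instead be the network's training error as a function of its flattened weight vector.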

The Analysis of Research Trends in Electric Vehicle using Topic Modeling (토픽 모델링을 이용한 전기차 연구 동향 분석)

  • Yuan Chen;Seok-Swoo Cho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.255-265
    • /
    • 2024
  • To address environmental challenges and improve energy efficiency, the adoption of electric vehicles has led to a surge in related research. However, to comprehensively understand the research trends within the field of electric vehicles, it is necessary to systematically analyze vast amounts of data. This study systematically analyzed research trends in the field of electric vehicles and identified key research topics through LDA topic modeling, based on 36,519 papers related to electric vehicles collected from the SCIE database. The data analysis revealed a total of 10 major topics, of which three were identified as hot topics showing an upward trend: Electric Vehicle Charging Infrastructure, Energy and Environmental Policy, and Optimization and Algorithms. Conversely, five topics were identified as cold topics exhibiting a downward trend: Battery Temperature and Cooling, Battery Materials and Chemistry, Motor and Mechanical Design, Control Strategies and Systems, and Battery Components and Materials. This study provides basic data for understanding the current research trends in electric vehicles and offers valuable information for researchers in selecting research topics related to electric vehicles.
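The hot/cold labels rest on fitting a time trend to each topic's yearly share of papers. A minimal sketch with invented yearly proportions (the two topic names are from the abstract; the numbers are not):

```python
def slope(years, shares):
    """Ordinary least-squares slope of a topic's share over years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(shares) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, shares))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

def classify(topic_shares, years):
    """Label each topic hot (rising share) or cold (falling share)."""
    return {t: ("hot" if slope(years, s) > 0 else "cold")
            for t, s in topic_shares.items()}

years = [2019, 2020, 2021, 2022, 2023]
shares = {  # hypothetical yearly topic proportions, not the study's
    "Electric Vehicle Charging Infrastructure": [0.06, 0.08, 0.10, 0.12, 0.14],
    "Motor and Mechanical Design": [0.12, 0.11, 0.09, 0.08, 0.07],
}
labels = classify(shares, years)
```

The study's LDA step produces the per-year topic proportions; this trend test is what separates the three hot topics from the five cold ones.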

A Study on The Conversion Factor between Heterogeneous DBMS for Cloud Migration

  • Joonyoung Ahn;Kijung Ryu;Changik Oh;Taekryong Han;Heewon Kim;Dongho Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2450-2463
    • /
    • 2024
  • Many legacy information systems are currently being migrated to the cloud, owing to the advantage of responding flexibly to changes in user needs and the system environment while reducing the initial investment cost of IT infrastructure such as servers and storage. The infrastructure of an information system migrated to the cloud is integrated through API connections while being subdivided internally using an MSA (Micro Service Architecture). The DBMS (Database Management System) also tends to grow larger after cloud migration. In most layers of the application architecture, scale can be measured and calculated from an auto-scaling perspective, but no standardized methodology has been established for hardware sizing of the DBMS. If the hardware sizing of the DBMS is wrong, problems such as poor performance of the information system or excessive auto-scaling may occur. Evaluating hardware size is all the more crucial because it also affects the financial cost of the migration. The CPU is the factor with the greatest influence on the hardware sizing of a DBMS. Therefore, this paper aims to calculate a conversion factor for CPU sizing that facilitates cloud migration between heterogeneous DBMSs. To do so, we utilize the concepts and definitions of hardware capacity planning and scale calculation in on-premise information systems. Methods to calculate the conversion factor using TPC-H tests are proposed and verified. In the future, further research and testing should be conducted on segmented CPU sizes and more heterogeneous DBMSs to demonstrate the effectiveness of the proposed test model.
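The conversion-factor idea can be sketched as a throughput ratio: run the same TPC-H workload on both DBMSs and scale the CPU estimate accordingly. All figures below are hypothetical, not the paper's measurements:

```python
import math

def conversion_factor(qphh_source, qphh_target):
    """Ratio of per-core TPC-H throughput (QphH): > 1 means the target
    DBMS does more work per core than the source DBMS."""
    return qphh_target / qphh_source

def target_cores(source_cores, factor, headroom=0.25):
    """CPU cores needed on the target DBMS, with a safety headroom."""
    return math.ceil(source_cores / factor * (1 + headroom))

# Hypothetical per-core QphH measured on identical hardware: the
# target DBMS delivers 80% of the source's throughput per core.
factor = conversion_factor(qphh_source=1000, qphh_target=800)
cores = target_cores(source_cores=16, factor=factor)
```

A factor below 1 inflates the target core count, which is the situation where a sizing error would otherwise cause the poor performance or over-scaling the abstract warns about.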

Application Plan of Goods Information in the Public Procurement Service for Enhancing U-City Plans (U-City계획 고도화를 위한 조달청 물품정보 활용 방안 : CCTV 사례를 중심으로)

  • PARK, Jun-Ho;PARK, Jeong-Woo;NAM, Kwang-Woo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.3
    • /
    • pp.21-34
    • /
    • 2015
  • In this study, a reference model is constructed that provides architects and designers with sufficient information on the intelligent service facilities essential for U-City space configuration, in support of enhanced design and planning activities. At the core of the reference model is comprehensive information about the intelligent service facilities around which service content is planned, together with regularly updated information on related technologies. A plan is presented to take advantage of the database of the list information system in the Public Procurement Service, which handles intelligent service facilities. We suggest a number of improvements by analyzing the current status of, and issues with, the goods information in the Public Procurement Service, and by conducting a simulation for the proper placement of CCTV. As U-City design has evolved from being IT technology-based to smart space-based, limitations such as the lack of standards and of information about the installation and placement of the intelligent service facilities that provide U-services have been reviewed. Due to the absence of relevant legislation and guidelines, however, planning activities such as the appropriate placement of intelligent service facilities are difficult when efficient service provision is considered. In addition, because little information about IT technologies and intelligent service facilities is available to U-City planners and designers, establishing an optimal plan with respect to service level and budget is difficult. To solve these problems, this study presents a plan that draws on the goods information of the Public Procurement Service. The Public Procurement Service has already built an industry-related database of around 260,000 cases, which is continually updated.
This database can be a very useful source of information about intelligent service facilities, the core of the ever-changing U-City industry, and the relevant technologies. However, the information provided has been insufficient for practical application, and constraints in the information disclosure process have limited its use. Therefore, by presenting an improvement plan for linking and applying the goods information of the Public Procurement Service, this study provides a basic framework for future U-City enhancement plans and for the common utilization of the goods information across multiple departments.

A Study on Dose-Response Models for Foodborne Disease Pathogens (주요 식중독 원인 미생물들에 대한 용량-반응 모델 연구)

  • Park, Myoung Su;Cho, June Ill;Lee, Soon Ho;Bahk, Gyung Jin
    • Journal of Food Hygiene and Safety
    • /
    • v.29 no.4
    • /
    • pp.299-304
    • /
    • 2014
  • The dose-response models are important for quantitative microbiological risk assessment (QMRA) because they enable prediction of the risk of human infection from foodborne pathogens. In this study, we performed a comprehensive literature review and meta-analysis to better quantify this association. The meta-analysis applied a final selection of 193 published papers covering 43 species of foodborne disease pathogens (26 bacteria, 9 viruses, and 8 parasites), which were identified and classified based on the dose-response models related to QMRA studies in the PubMed and ScienceDirect databases and on internet websites during 1980-2012. The main search terms combined "food", "foodborne disease pathogen", "dose-response model", and "quantitative microbiological risk assessment". The appropriate dose-response models for Campylobacter jejuni, pathogenic E. coli O157:H7 (EHEC / EPEC / ETEC), Listeria monocytogenes, Salmonella spp., Shigella spp., Staphylococcus aureus, Vibrio parahaemolyticus, Vibrio cholerae, Rotavirus, and Cryptosporidium parvum were beta-Poisson (α = 0.15, β = 7.59, fi = 0.72), beta-Poisson (α = 0.49, β = 1.81 × 10^5, fi = 0.67) / beta-Poisson (α = 0.22, β = 8.70 × 10^3, fi = 0.40) / beta-Poisson (α = 0.18, β = 8.60 × 10^7, fi = 0.60), exponential (r = 1.18 × 10^-10, fi = 0.14), beta-Poisson (α = 0.11, β = 6,097, fi = 0.09), beta-Poisson (α = 0.21, β = 1,120, fi = 0.15), exponential (r = 7.64 × 10^-8, fi = 1.00), beta-Poisson (α = 0.17, β = 1.18 × 10^5, fi = 1.00), beta-Poisson (α = 0.25, β = 16.2, fi = 0.57), exponential (r = 1.73 × 10^-2, fi = 1.00), and exponential (r = 1.73 × 10^-2, fi = 0.17), respectively. These results provide the preliminary data necessary for QMRA of foodborne pathogens.
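Both model families reduce to one-line formulas. The sketch below evaluates the exponential model and the common approximate beta-Poisson model with two of the parameter sets quoted above (the doses are illustrative):

```python
import math

def exponential_model(dose, r):
    """Exponential dose-response: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson: P = 1 - (1 + dose/beta)^(-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Parameters from the abstract above; doses are made up for illustration.
p_campy = beta_poisson(dose=100, alpha=0.15, beta=7.59)   # C. jejuni
p_vibrio = exponential_model(dose=1e6, r=7.64e-8)         # V. parahaemolyticus
```

Both curves are monotone in dose and bounded in [0, 1], which is what makes them usable as infection-risk predictors in a QMRA chain.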

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.187-204
    • /
    • 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to refer to when making decisions. For example, when considering travel to a city, a person may search reviews from a search engine such as Google or social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely accepted for text mining of the Web. Sentiment analysis has been used to classify documents through machine learning techniques, such as decision trees, neural networks, and support vector machines (SVMs), and to determine the attitude, position, and sensibility of people who write articles about various topics published on the Web. Regardless of their polarity, emotional reviews are very helpful material for analyzing customers' opinions. Sentiment analysis applies text mining techniques to Web text to extract subjective information and thereby determine the attitude or position of the person who wrote an article about a particular topic, helping analysts understand what customers really want almost instantly. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and a self-organizing map (SOM). In addition, we developed a detection model to predict hot topics by using machine learning techniques such as logit, the decision tree, and SVM.
We employed sentiment analysis to develop our model for the selection and detection of hot topics on China's online stock forum. The sentiment analysis calculates a sentiment value for each document by matching and classifying its words against a polarity dictionary (positive or negative). The online stock forum was an attractive site because of its information about stock investment. Users post numerous texts about stock movement, analyzing the market according to government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the online forum's topics into 21 categories for the sentiment analysis, and 144 topics were selected among the 21 categories. The posts were crawled to build a positive and negative text database; after preprocessing the text from March 2013 to February 2015, we ultimately obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and SOM produced equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms: CHAID, CART, and C4.5. The results of CHAID were subpar compared to the others. We also employed SVM to detect hot topics from the negative data; the SVM models were trained with the radial basis function (RBF) kernel, tuned by a grid search. Detecting hot topics through sentiment analysis provides investors with the latest trends and hot topics in the stock forum, so that they no longer need to search the vast amounts of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes towards government policy and firms' products and services.
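The dictionary-based scoring step can be sketched directly; the English word lists and the post below are invented stand-ins for the forum's actual polarity dictionary:

```python
# Placeholder lexicons; the study's dictionary covers forum vocabulary.
POSITIVE = {"gain", "rally", "growth", "buy"}
NEGATIVE = {"loss", "crash", "risk", "sell"}

def sentiment_value(tokens):
    """Sentiment value in [-1, 1]: (pos - neg) / matched words.

    Returns 0.0 when no token matches either lexicon.
    """
    pos = sum(1 for t in tokens if t in POSITIVE)
    neg = sum(1 for t in tokens if t in NEGATIVE)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

post = "strong rally expected buy before the next growth phase".split()
score = sentiment_value(post)
```

Per-topic aggregates of such scores are the kind of features that an interest index, k-means, or an SVM detector could then consume.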

The Prediction of Purchase Amount of Customers Using Support Vector Regression with Separated Learning Method (Support Vector Regression에서 분리학습을 이용한 고객의 구매액 예측모형)

  • Hong, Tae-Ho;Kim, Eun-Mi
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.213-225
    • /
    • 2010
  • With the rapid growth of information technology, data mining has empowered managers in charge of marketing tasks to present personalized and differentiated marketing programs to their customers. Most studies on customer response have focused on predicting whether or not customers would respond to a marketing promotion, as marketing managers have been eager to identify who would respond. Many studies utilizing data mining have thus addressed binary decision problems such as bankruptcy prediction, network intrusion detection, and fraud detection in credit card usage, and the prediction of customer response has been studied with similar methods because it is also a dichotomous decision problem. A number of competitive data mining techniques, such as neural networks, SVM (support vector machine), decision trees, logit, and genetic algorithms, have been applied to predicting customer response to marketing promotions. Marketing managers have also tried to classify their customers with quantitative measures acquired from their transaction databases, such as recency, frequency, and monetary value: how recently customers purchased, how frequently they purchased in a period, and how much they spent at a time. We proposed an approach that differentiates customers within the same rating among segmented customers. Our approach employs support vector regression (SVR) to forecast the purchase amount of customers in each customer rating. Our study used a sample of 41,924 customers, extracted from the DMEF04 data set, who had purchased at least once in the last two years. We classified customers from the first to the fifth rating based on the purchase amount after a marketing promotion.
Here, customers in the first rating have large purchase amounts, and those in the fifth rating are non-respondents to the promotion. Our proposed model forecasts the purchase amount of the customers in the same rating, so marketing managers can make a differentiated and personalized marketing program for each customer even when customers belong to the same rating. In addition, we proposed a more efficient learning method by separating the learning samples, and we employed two learning methods to compare the proposed method with the general learning method for SVRs. LMW (Learning Method using Whole data for purchasing customers) is a general learning method for forecasting the purchase amount of customers. We proposed LMS (Learning Method using Separated data for classifying purchasing customers), which builds four different SVR models, one for each class of customers. To evaluate the models, we calculated the MAE (Mean Absolute Error) and MAPE (Mean Absolute Percent Error) of each model's predictions of customer purchase amounts. In LMW, the overall performance was 0.670 MAPE, and the best performance was 0.327 MAPE. The proposed LMS model generally performed better than the LMW model: in LMS, the best performance was 0.275 MAPE, and LMS outperformed LMW in each class of customers. Comparing LMS to LMW, our proposed model forecast the purchase amount of customers in each class significantly better. Our approach will be useful for marketing managers when they need to select customers for a promotion: even if customers belong to the same class, marketing managers can offer them differentiated and personalized marketing promotions.
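The MAE and MAPE metrics are easy to reproduce; a small sketch with invented purchase amounts (note that the paper reports MAPE as a fraction, e.g. 0.275, rather than a percentage):

```python
def mae(actual, pred):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean Absolute Percent Error, as a fraction of the actuals."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, pred)) / len(actual)

# Hypothetical actual vs. predicted purchase amounts for four customers.
actual = [120.0, 80.0, 200.0, 50.0]
pred = [100.0, 90.0, 180.0, 55.0]
m_abs, m_pct = mae(actual, pred), mape(actual, pred)
```

Because MAPE normalizes each error by its actual value, it lets the paper compare LMW and LMS across customer classes whose purchase amounts differ in scale.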

A Study on Damage factor Analysis of Slope Anchor based on 3D Numerical Model Combining UAS Image and Terrestrial LiDAR (UAS 영상 및 지상 LiDAR 조합한 3D 수치모형 기반 비탈면 앵커의 손상인자 분석에 관한 연구)

  • Lee, Chul-Hee;Lee, Jong-Hyun;Kim, Dal-Joo;Kang, Joon-Oh;Kwon, Young-Hun
    • Journal of the Korean Geotechnical Society
    • /
    • v.38 no.7
    • /
    • pp.5-24
    • /
    • 2022
  • The current performance evaluation of slope anchors qualitatively assesses the physical bonding between the anchor head and the ground, as well as cracks or breakage of the anchor head. However, such evaluation does not measure these primary factors quantitatively, so time-dependent management of the anchors is almost impossible. This study evaluates a 3D numerical model built by SfM, which combines UAS images with terrestrial LiDAR, to collect numerical data on the damage factors and to use those data for the quantitative maintenance of anchor systems installed on slopes. The UAS 3D model, which often shows relatively low precision in the z-coordinate for vertical objects such as slopes, is combined with terrestrial LiDAR scan data to improve the accuracy of the z-coordinate measurement. After validating the system, a field test was conducted with ten anchors installed on a slope, with arbitrarily damaged heads. The damage (cracks, breakage, and rotational displacements) was detected and numerically evaluated through the orthogonal projection of the measurement system. The results show that the introduced system, at 8K resolution, can detect cracks with apertures smaller than 0.3 mm within an error range of 0.05 mm. The system can also detect the volume of the damaged part, showing that the maximum damaged area of the anchor head was within 3% of the original design guideline. The ground adhesion at the anchor head, for which the z-coordinate is highly relevant, was almost impossible to measure with the UAS 3D numerical model alone because of its blind spots. By applying the combined system, however, elevation differences between the anchor bottom and the irregular ground surface were identified, and the average value at 20 locations was calculated for the ground adhesion. Additionally, rotation angles and displacements of the anchor head smaller than 1" were detected.
These observations show that the 3D numerical model can yield quantitative data on anchor damage. Such data collection can build a database to serve as a fundamental resource for quantitative anchor damage evaluation in the future.
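The 0.3 mm claim at 8K can be sanity-checked with a ground-sample-distance calculation; the 1 m field-of-view width and the two-pixel detectability criterion below are assumptions for illustration, not the paper's setup:

```python
def ground_sample_distance(fov_width_mm, pixels_across):
    """Size of one pixel on the target surface, in mm."""
    return fov_width_mm / pixels_across

def detectable(crack_mm, gsd_mm, pixels_needed=2):
    """Assume a crack is resolvable if it spans >= pixels_needed pixels
    (a common rule of thumb, not a claim from the paper)."""
    return crack_mm >= pixels_needed * gsd_mm

# Hypothetical 1 m wide field of view imaged at 8K (7680 px across):
gsd = ground_sample_distance(fov_width_mm=1000.0, pixels_across=7680)
ok = detectable(0.3, gsd)
```

Under these assumptions each pixel covers about 0.13 mm, so a 0.3 mm crack spans roughly two pixels, which is consistent with the detection threshold the abstract reports.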