• Title/Summary/Keyword: operating algorithm


Development of algorithm for work intensity evaluation using excess overwork index of construction workers with real-time heart rate measurement device

  • Jae-young Park;Jung Hwan Lee;Mo-Yeol Kang;Tae-Won Jang;Hyoung-Ryoul Kim;Se-Yeong Kim;Jongin Lee
    • Annals of Occupational and Environmental Medicine
    • /
    • v.35
    • /
    • pp.24.1-24.15
    • /
    • 2023
  • Background: Construction workers are vulnerable to fatigue due to their high physical workload. This study aimed to investigate the relationship between overwork and heart rate in construction workers and to propose a scheme for preventing overwork in advance. Methods: We measured the heart rates of construction workers at a construction site of a residential and commercial complex in Seoul from August to October 2021 and developed an index that monitors overwork in real time. A total of 66 Korean workers participated in the study, wearing real-time heart rate monitoring equipment. The relative heart rate (RHR) was calculated using the minimum and maximum heart rates, and the maximum acceptable working time (MAWT) was estimated using RHR to calculate the workload. The overwork index (OI) was defined as the cumulative workload evaluated with the MAWT. A proper scenario line (PSL) was set as a reference that can be compared against the OI to evaluate the degree of overwork in real time. The excess overwork index (EOI) was evaluated in real time during work using the difference between the OI and the PSL. The EOI values were used in a receiver operating characteristic (ROC) curve analysis to find the optimal cut-off value for classifying the overwork state. Results: Of the 60 participants analyzed, 28 (46.7%) were classified into the overwork group based on their RHR. ROC curve analysis showed that the EOI was a good predictor of overwork, with an area under the curve of 0.824. The optimal cut-off values ranged from 21.8% to 24.0% depending on the method used to determine the cut-off point. Conclusion: The EOI showed promising results as a tool for assessing overwork in real time using heart rate monitoring and MAWT-based calculation. Further research is needed to assess physical workload accurately and to determine cut-off values across industries.
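The cut-off selection described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: it assumes EOI scores and binary overwork labels are already available, and it uses Youden's J statistic, one common criterion among the several cut-off rules the abstract alludes to.

```python
def roc_curve(scores, labels):
    """All (FPR, TPR, threshold) triples, one per distinct candidate cutoff.

    scores: e.g. per-worker EOI values; labels: 1 = overwork group, 0 = normal.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos, t))
    return points

def youden_cutoff(scores, labels):
    """Cutoff maximizing Youden's J = TPR - FPR."""
    return max(roc_curve(scores, labels), key=lambda p: p[1] - p[0])[2]
```

With a toy sample where the two groups separate cleanly, the returned cutoff is simply the smallest score of the positive group.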

CT-Based Radiomics Signature for Preoperative Prediction of Coagulative Necrosis in Clear Cell Renal Cell Carcinoma

  • Kai Xu;Lin Liu;Wenhui Li;Xiaoqing Sun;Tongxu Shen;Feng Pan;Yuqing Jiang;Yan Guo;Lei Ding;Mengchao Zhang
    • Korean Journal of Radiology
    • /
    • v.21 no.6
    • /
    • pp.670-683
    • /
    • 2020
  • Objective: The presence of coagulative necrosis (CN) in clear cell renal cell carcinoma (ccRCC) indicates a poor prognosis, while the absence of CN indicates a good prognosis. The purpose of this study was to build and validate a radiomics signature based on preoperative CT imaging data to estimate CN status in ccRCC. Materials and Methods: Altogether, 105 patients with pathologically confirmed ccRCC were retrospectively enrolled in this study and then divided into training (n = 72) and validation (n = 33) sets. Thereafter, 385 radiomics features were extracted from the three-dimensional volumes of interest of each tumor, and 10 traditional features were assessed by two experienced radiologists using triple-phase CT-enhanced images. A multivariate logistic regression algorithm was used to build the radiomics score and traditional predictors in the training set, and their performance was assessed and then tested in the validation set. The radiomics signature to distinguish CN status was then developed by incorporating the radiomics score and the selected traditional predictors. The receiver operating characteristic (ROC) curve was plotted to evaluate the predictive performance. Results: The area under the ROC curve (AUC) of the radiomics score, which consisted of 7 radiomics features, was 0.855 in the training set and 0.885 in the validation set. The AUC of the traditional predictor, which consisted of 2 traditional features, was 0.843 in the training set and 0.858 in the validation set. The radiomics signature showed the best performance with an AUC of 0.942 in the training set, which was then confirmed with an AUC of 0.969 in the validation set. Conclusion: The CT-based radiomics signature that incorporated radiomics and traditional features has the potential to be used as a non-invasive tool for preoperative prediction of CN in ccRCC.
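The signature-building step, a multivariate logistic regression combining the radiomics score with traditional predictors, can be sketched as follows. This is a generic stand-in (plain gradient descent on a hypothetical two-feature input), not the authors' pipeline.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Plain gradient-descent logistic regression; w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                       # gradient of the log loss w.r.t. z
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    """Predicted probability of the positive class (here: CN present)."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

Each row of `X` stands for one patient's [radiomics score, traditional predictor]; in practice the fitted model would be evaluated on the held-out validation set, as in the abstract.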

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.95-107
    • /
    • 2017
  • This preliminary study was undertaken toward the development of a machine-learning-based automatic incident detection system for tunnels, which detects incidents occurring in a tunnel in real time and also identifies the type of incident. Two road sites with operating CCTVs were selected, and part of the CCTV footage was processed to produce training data sets. The data sets consist of the position and time information of moving objects on the CCTV screen, extracted by first detecting and then tracking objects entering the screen with a conventional image-processing technique available in this study. Each data set is matched with one of six event categories, such as lane change and stopping, which are also included in the training data. The training data were learned by a resilient-propagation neural network with two hidden layers; nine architectural models were set up for a parametric study, from which the 300 (first hidden layer)-150 (second hidden layer) architecture was found to be optimal, giving the highest accuracy on both the training data and test data not used for training. This study showed that highly variable and complex traffic and incident features can be identified well, without any explicit feature definition, by using the machine-learning concept. In addition, the detection capability and accuracy of the machine-learning-based system will improve automatically as the big data of tunnel CCTV images grows.
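The paper trains a neural network on object tracks; as a much simpler stand-in, the sketch below shows how (position, time) tracks might be turned into features and matched to event categories with a nearest-centroid rule. The feature choices and labels are illustrative assumptions, not the paper's method.

```python
import math
from collections import defaultdict

def track_features(track):
    """track: (t, x, y) samples for one tracked object on the CCTV screen."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    mean = sum(speeds) / len(speeds)
    var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    net = math.hypot(track[-1][1] - track[0][1], track[-1][2] - track[0][2])
    return (mean, var, net)

def fit_centroids(samples):
    """samples: (features, event_label) pairs -> per-label mean feature vector."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for f, label in samples:
        for i, v in enumerate(f):
            sums[label][i] += v
        counts[label] += 1
    return {label: tuple(v / counts[label] for v in s) for label, s in sums.items()}

def classify(centroids, f):
    """Assign the event label whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(centroids[lab], f)))
```

A stationary track (zero speed, zero displacement) then lands in the "stopping" category, while a steadily moving one does not.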

A SOC Coefficient Factor Calibration Method to Improve Accuracy of the Lithium Battery Equivalence Model (리튬 배터리 등가모델의 정확도 개선을 위한 SOC 계수 보정법)

  • Lee, Dae-Gun;Jung, Won-Jae;Jang, Jong-Eun;Park, Jun-Seok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.4
    • /
    • pp.99-107
    • /
    • 2017
  • This paper proposes a battery-model coefficient correction method for improving the accuracy of existing lithium battery equivalent models. BMSs (battery management systems) have been researched and developed to minimize the shortening of battery life by monitoring the SOC (state of charge) of lithium batteries used in various industrial fields such as EVs. However, cell-balancing operation based on battery cell voltage cannot follow SOC changes caused by the internal resistance and capacitance. Various battery equivalent models have been studied for estimating battery SOC from the internal resistance and capacitors of the battery. However, it is difficult to apply the same model to all batteries, and it is difficult to estimate the battery state in the transient state. Existing studies of electrical equivalent battery models simulate the charging and discharging dynamic characteristics of one kind of battery with an error rate of 5-10%, which is not suitable for actual batteries having different electrical characteristics. Therefore, this paper proposes a battery-model coefficient correction algorithm that suits real operating environments with different battery models and capacities, and can simulate dynamic characteristics with an error rate of less than 5%. To verify the proposed coefficient calibration method, lithium batteries with a rated voltage of 3.7 V and capacities of 280 mAh and 1,600 mAh were used, and a two-stage RC tank model was used as the electrical equivalent model of the lithium battery. Battery charge/discharge tests and model verification were performed at four C-rates: 0.25C, 0.5C, 0.75C, and 1C. The proposed coefficient correction algorithm was applied to the two battery models; the maximum error rate of the discharge and transient-state characteristics was 2.13%.
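A two-stage RC tank model like the one used for verification can be simulated with a simple forward-Euler update. This is a minimal sketch; the linear OCV curve and every parameter value in the test below are illustrative assumptions, not the paper's identified coefficients.

```python
def simulate_2rc(current, dt, soc0, q_ah, ocv, r0, r1, c1, r2, c2):
    """Discrete-time two-stage RC battery model (discharge: current > 0, amps).

    Returns the terminal-voltage trace: OCV(SOC) minus the ohmic drop and the
    two RC-branch overvoltages that capture the transient behavior.
    """
    soc, v1, v2 = soc0, 0.0, 0.0
    out = []
    for i in current:
        soc -= i * dt / (q_ah * 3600.0)        # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))   # fast RC branch
        v2 += dt * (i / c2 - v2 / (r2 * c2))   # slow RC branch
        out.append(ocv(soc) - i * r0 - v1 - v2)
    return out
```

The coefficient-correction idea would then amount to adjusting `r0, r1, c1, r2, c2` per SOC region until this trace matches the measured charge/discharge curves.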

Three-Dimensional High-Frequency Electromagnetic Modeling Using Vector Finite Elements (벡터 유한 요소를 이용한 고주파 3차원 전자탐사 모델링)

  • Son Jeong-Sul;Song Yoonho;Chung Seung-Hwan;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.280-290
    • /
    • 2002
  • A three-dimensional (3-D) electromagnetic (EM) modeling algorithm has been developed using the finite element method (FEM) to provide more efficient interpretation techniques for EM data. When FEM based on nodal elements is applied to EM problems, spurious solutions, the so-called 'vector parasite', occur due to the discontinuity of normal electric fields and may lead to completely erroneous results. Among the methods for curing this spurious problem, this study adopts vector elements, whose basis functions have both amplitude and direction. To reduce computational cost and required core memory, the complex bi-conjugate gradient (CBCG) method is applied to solve the complex symmetric FEM matrix, and the point Jacobi method is used to accelerate the convergence rate. To verify the developed 3-D EM modeling algorithm, its electric and magnetic fields for a layered-earth model are compared with the layered-earth solution. As expected, the vector-based FEM developed in this study does not suffer from the vector parasite problem, while the conventional nodal-based FEM produces large errors due to the discontinuity of field variables. To test applicability at high frequencies, 100 MHz was used as the operating frequency for the layered structure. Fields calculated by the developed code also match the layered-earth solution well for models with a dielectric anomaly as well as a conductive anomaly. In the case of a vertical electric dipole source, the discontinuity of field variables causes the conventional nodal-based FEM to include large errors due to the vector parasite; even in this case, the vector-based FEM gave almost the same results as the layered-earth solution. The magnetic fields induced by a dielectric anomaly at high frequencies show unique behaviors different from those induced by a conductive anomaly. Since our 3-D EM modeling code can reflect the effect of a dielectric anomaly as well as a conductive anomaly, it may serve as groundwork not only for applying the high-frequency EM method to field surveys but also for analyzing the field data obtained by it.
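For a complex *symmetric* (not Hermitian) system such as the FEM matrix above, the CBCG iteration reduces to a CG-like recursion built on the unconjugated inner product r·r. A minimal unpreconditioned sketch (the paper additionally applies point Jacobi preconditioning), exercised on an illustrative 2x2 system:

```python
def cbcg(matvec, b, tol=1e-10, maxiter=200):
    """CG-style solver for complex symmetric A, using the unconjugated
    bilinear form sum(r_i * r_i) instead of the Hermitian inner product."""
    n = len(b)
    x = [0j] * n
    r = [bi - ai for bi, ai in zip(b, matvec(x))]
    p = list(r)
    rr = sum(ri * ri for ri in r)              # note: no complex conjugation
    for _ in range(maxiter):
        ap = matvec(p)
        alpha = rr / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rr_new = sum(ri * ri for ri in r)
        if max(abs(ri) for ri in r) < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x
```

The unconjugated form is what makes the short two-term recurrence valid for complex symmetric matrices, which is why it is preferred over generic BiCG for this class of EM problems.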

A Construction of TMO Object Group Model for Distributed Real-Time Services (분산 실시간 서비스를 위한 TMO 객체그룹 모델의 구축)

  • 신창선;김명희;주수종
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.307-318
    • /
    • 2003
  • In this paper, we design and construct a TMO object group that provides guaranteed real-time services in distributed object computing environments, and verify the execution of the model for correct distributed real-time services. The TMO object group we suggest is based on TINA's object-group concept. The model consists of TMO objects having real-time properties and components that support the object management service and the real-time scheduling service within the TMO object group. TMO objects may be duplicated or non-duplicated across distributed systems. Our model can execute guaranteed distributed real-time services on COTS middleware without being restricted to a particular ORB or operating system. To achieve the goals of our model, we defined the concept of the TMO object and the structure of the TMO object group, and we designed and implemented the functions and interactions of the components in the object group. The TMO object group includes a Dynamic Binder object and a Scheduler object, supporting the object management service and the real-time scheduling service, respectively. The Dynamic Binder object supports the dynamic binding service that selects the appropriate one among the duplicated TMO objects for a client's request, and the Scheduler object supports the real-time scheduling service that determines the priority of tasks executed by an arbitrary TMO object for clients' service requests. To verify the execution of our model, we implemented the Dynamic Binder object and the Scheduler object by extending existing known algorithms: the binding priority algorithm for the dynamic binding service and the EDF algorithm for the real-time scheduling service. Finally, from the numerical analysis results, we verified that our TMO object group model supports the dynamic binding service for duplicated or non-duplicated TMO objects, as well as the real-time scheduling service for an arbitrary TMO object requested by clients.
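The EDF policy adopted for the Scheduler object can be illustrated with a run-to-completion, single-processor sketch; the task-tuple shape and the missed-deadline handling below are assumptions for illustration, not the paper's interface.

```python
import heapq

def edf_schedule(tasks, now=0.0):
    """tasks: (deadline, exec_time, name) triples, all released at `now`.

    Executes tasks in earliest-deadline-first order and raises if any task
    would finish past its deadline; with a single processor and simultaneous
    release, EDF order is optimal, so no other order could save it either.
    """
    heap = list(tasks)
    heapq.heapify(heap)            # min-heap keyed on the deadline
    t, order = now, []
    while heap:
        deadline, exec_time, name = heapq.heappop(heap)
        t += exec_time
        if t > deadline:
            raise RuntimeError(f"task {name} misses its deadline {deadline}")
        order.append(name)
    return order
```

For example, three requests with deadlines 10, 4, and 7 are served in deadline order regardless of submission order.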

Development and assessment of pre-release discharge technology for response to flood on deteriorated reservoirs dealing with abnormal weather events (이상기후대비 노후저수지 홍수 대응을 위한 사전방류 기술개발 및 평가)

  • Moon, Soojin;Jeong, Changsam;Choi, Byounghan;Kim, Seungwook;Jang, Daewon
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.11
    • /
    • pp.775-784
    • /
    • 2023
  • With the increasing trend of extreme rainfall exceeding the design frequency of man-made structures due to abnormal weather, it is necessary to review the safety of agricultural reservoirs designed in the past. However, except for reservoirs over a certain size under the jurisdiction of the Korea Rural Affairs Corporation, none of the 13,685 local-government-managed reservoirs are equipped for emergency discharge. In such cases, it is important to deploy a mobile siphon to the site quickly for pre-release discharge, and this study evaluated the applicability of a mobile siphon capable of both pre-release and emergency discharge, with a diameter of 200 mm, a minimum head difference of 6 m, and a capacity of 420 m³/h (10,000 m³/day), to the Yugum Reservoir in Gyeongju City. The test bed, Yugum Reservoir, was completed in 1945 and has been in use for about 78 years. According to the hydrological stability analysis, the lowest height of the current dam crest section is 27.15 EL.m, which is 0.29 m lower than the reviewed flood level of 27.44 EL.m, indicating the possibility of overtopping of the embankment; the freeboard is insufficient by 1.72 m, so the reservoir was judged not to secure hydrological safety. Because water level-flow measurements were not carried out regularly, it was difficult to establish a clear water level-flow relationship curve, so the water level-volume curve was derived instead. Based on this curve, an operating algorithm for small and medium-sized aging reservoirs was developed that considers the pre-release time and the amount of spillway discharge and predicts the reservoir overtopping time for floods of each frequency, thereby securing evacuation time in advance and reducing the risk of collapse. Based on one row of 200-mm-diameter mobile siphons, the optimal pre-release start for securing evacuation time (about 1 hour) while maintaining 80% of the upper-limit storage (about 30,000 m³) during a 30-year-frequency flood was analyzed to be 12 hours in advance. If this siphon-based pre-release technology and the reservoir operating algorithm for small and medium-sized aging reservoirs are implemented ahead of abnormal weather events and support managers' decision-making, it is possible to secure the safety of residents in areas at risk from reservoir collapse, relieve residents' anxiety through an evacuation support system, and reduce risk factors by providing risk-avoidance measures in the event of a reservoir emergency.
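The timing arithmetic behind pre-release can be sketched as below. The 420 m³/h and 30,000 m³ figures from the abstract are reused purely as example inputs; the real operating algorithm additionally accounts for flood hydrographs by frequency and spillway discharge.

```python
def prerelease_hours(storage_m3, target_frac, siphon_m3_per_h, rows=1):
    """Hours of siphon discharge needed to draw a full pool down to target_frac."""
    return storage_m3 * (1.0 - target_frac) / (siphon_m3_per_h * rows)

def overtopping_eta_h(volume_m3, crest_volume_m3, inflow_m3_per_h, outflow_m3_per_h):
    """Hours until the pool reaches crest volume; infinite if outflow keeps up."""
    net = inflow_m3_per_h - outflow_m3_per_h
    if net <= 0:
        return float("inf")
    return (crest_volume_m3 - volume_m3) / net
```

Comparing the overtopping ETA against the required evacuation time is what lets the algorithm decide how many hours in advance the siphon must start.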

Development and Performance Evaluation of an Animal SPECT System Using Philips ARGUS Gamma Camera and Pinhole Collimator (Philips ARGUS 감마카메라와 바늘구멍조준기를 이용한 소동물 SPECT 시스템의 개발 및 성능 평가)

  • Kim, Joong-Hyun;Lee, Jae-Sung;Kim, Jin-Su;Lee, Byeong-Il;Kim, Soo-Mee;Choung, In-Soon;Kim, Yu-Kyeong;Lee, Won-Woo;Kim, Sang-Eun;Chung, June-Key;Lee, Myung-Chul;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.445-455
    • /
    • 2005
  • Purpose: We developed an animal SPECT system using a clinical Philips ARGUS scintillation camera and a pinhole collimator with specially manufactured small apertures. In this study, we evaluated the physical characteristics of this system and its biological feasibility for animal experiments. Materials and Methods: A rotating station for small animals driven by a step motor, together with operating software, was developed. Pinhole inserts with small apertures (diameters of 0.5, 1.0, and 2.0 mm) were manufactured, and physical parameters, including planar spatial resolution, sensitivity, and reconstructed resolution, were measured for selected apertures. To measure the size of the usable field of view as a function of distance from the focal point, multiple line sources separated by equal distances were scanned and the number of lines within the field of view was counted. Using a Tc-99m line source 0.5 mm in diameter and 12 mm in length placed at the exact center of the field of view, planar spatial resolution was measured as a function of distance. A calibration factor for obtaining FWHM values in mm was calculated from the planar image of two separated line sources. A Tc-99m point source 1 mm in diameter was used to measure system sensitivity. In addition, SPECT data of a micro phantom with cold and hot line inserts and of a rat brain after intravenous injection of [I-123]FP-CIT were acquired and reconstructed using a filtered backprojection algorithm for the pinhole collimator. Results: The size of the usable field of view was proportional to the distance from the focal point, and the relationship could be fitted to a linear equation (y = 1.4x + 0.5, x: distance). System sensitivity and planar spatial resolution at 3 cm, measured with the 1.0 mm aperture, were 71 cps/MBq and 1.24 mm, respectively. In the SPECT image of the rat brain with [I-123]FP-CIT acquired using the 1.0 mm aperture, the distribution of dopamine transporters in the striatum was well identified in each hemisphere. Conclusion: We verified that this new animal SPECT system with the Philips ARGUS scanner and small apertures has sufficient performance for small animal imaging.
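A calibration line such as y = 1.4x + 0.5 (usable FOV size versus distance from the focal point) is an ordinary least-squares fit. A minimal fit routine, with synthetic points lying exactly on that line used only as a check:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ≈ slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx
```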

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.3
    • /
    • pp.192-207
    • /
    • 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems. However, the SSL protocol has several problems stemming from its operational rigidity. First, SSL places a considerable burden on CPU utilization, lowering the performance of security services in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, SSL can be vulnerable to cryptanalysis because a fixed algorithm and key are used. Third, it is difficult to add and use new cryptography algorithms. Finally, it is difficult for developers to learn and use the cryptography APIs (application program interfaces) for the SSL protocol. Hence, we need to solve these problems while providing a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component, designed and implemented using the CBD (component-based development) concept, to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Further, the SSL component can improve productivity and reduce development cost, because it can be reused. Also, when new algorithms are added or existing algorithms are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service in the application layer. We first derive the requirements, and then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented in EJB, which provides efficient data handling by encrypting/decrypting only selected data; usability is also improved because the user can choose the data and the mechanism as intended. In conclusion, our tests and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because the growth rate of processing time for the SSL component is lower than that of the SSL protocol.
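The two component ideas above, pluggable algorithms behind a stable interface and protecting only the fields the caller selects, can be sketched in a few lines. This Python sketch is only an analogy to the paper's EJB components, shows an integrity component only, and all class and field names are hypothetical.

```python
import hmac

class IntegrityComponent:
    """Pluggable integrity service: the digest algorithm is a constructor
    parameter, so swapping algorithms does not touch calling code."""
    def __init__(self, key: bytes, algo: str = "sha256"):
        self.key, self.algo = key, algo

    def seal(self, data: bytes) -> str:
        return hmac.new(self.key, data, self.algo).hexdigest()

    def verify(self, data: bytes, tag: str) -> bool:
        return hmac.compare_digest(self.seal(data), tag)

class SelectiveChannel:
    """Protect only the fields the caller marks sensitive, instead of the
    whole stream as the plain SSL protocol does."""
    def __init__(self, integrity: IntegrityComponent):
        self.integrity = integrity

    def send(self, fields: dict, sensitive: set) -> dict:
        return {k: (v, self.integrity.seal(v.encode()) if k in sensitive else None)
                for k, v in fields.items()}
```

Selective protection is exactly where the abstract's performance claim comes from: non-sensitive fields skip the cryptographic work entirely.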

A Study of a Non-commercial 3D Planning System, Plunc for Clinical Applicability (비 상업용 3차원 치료계획시스템인 Plunc의 임상적용 가능성에 대한 연구)

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal
    • /
    • v.16 no.1
    • /
    • pp.71-79
    • /
    • 1998
  • Purpose: The objective of this study is to introduce our installation of a non-commercial 3D planning system, Plunc, and to confirm its clinical applicability in various treatment situations. Materials and Methods: We obtained the source code of Plunc, offered by the University of North Carolina, and installed it on a Pentium Pro 200 MHz PC (128 MB RAM, Millennium VGA) running the Linux operating system. To examine the accuracy of dose distributions calculated by Plunc, we entered beam data for the 6 MV photons of our linear accelerator (Siemens MXE 6740), including tissue-maximum ratios, scatter-maximum ratios, attenuation coefficients, and the shapes of the wedge filters. We then compared dose distributions calculated by Plunc (percent depth dose, PDD; dose profiles with and without wedge filters; oblique incident beams; and dose distributions under an air gap) with measured values. Results: Plunc operated in almost real time, except for spending about 10 seconds on the full-volume dose distribution and dose-volume histogram (DVH) on the PC described above. Compared with measurements for irradiations at 90-cm SSD and 10-cm-depth isocenter, the PDD curves calculated by Plunc did not exceed 1% inaccuracy except in the buildup region. For dose profiles with and without wedge filters, the calculated values were accurate within 2%, except in the low-dose region outside the irradiated field, where Plunc showed a 5% dose reduction. For the oblique incident beam, Plunc showed good agreement except in the low-dose region below 30% of the isocenter dose. For the dose distribution under an air gap, there was a 5% error in the central-axis dose. Conclusion: By comparing Plunc photon dose calculations with measurements, we confirmed that Plunc shows acceptable accuracy of about 2-5% in typical treatment situations, comparable to commercial planning systems using correction-based algorithms. Plunc does not yet have a function for electron beam planning. However, it is possible to implement electron dose calculation modules, or more accurate photon dose calculation, in the Plunc system. Plunc is shown to be useful for overcoming many limitations of 2D planning systems in clinics where a commercial 3D planning system is not available.
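Accuracy figures like the 1-5% above come from comparing calculated against measured doses point by point. A sketch of such a comparison for PDD curves, excluding the buildup region as the abstract does (the 15 mm buildup cutoff and all test numbers are illustrative assumptions):

```python
def max_pdd_error(depths_mm, calculated, measured, buildup_mm=15.0):
    """Largest |calculated - measured| as a percentage of the measured local
    dose, skipping the buildup region where agreement is not expected."""
    errors = [abs(c - m) / m * 100.0
              for d, c, m in zip(depths_mm, calculated, measured)
              if d > buildup_mm]
    return max(errors)
```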
