• Title/Summary/Keyword: acquisition probability

Search Results: 147

Problem space based search algorithm for manufacturing process with rework probabilities affecting product quality and tardiness (Rework 확률이 제품의 품질과 납기준수에 영향을 주는 공정을 위한 문제공간기반 탐색 알고리즘)

  • Kang, Yong-Ha; Lee, Young-Sup; Shin, Hyun-Joon
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.7 / pp.1702-1710 / 2009
  • In this paper, we propose a problem space based search (PSBS) algorithm to solve a parallel machine scheduling problem that takes rework probabilities into account. For each pair of a machine and a job type, the rework probability of a job on that machine is assumed to be known from historical data. Neighborhoods are generated by perturbing four problem data vectors (processing times, due dates, setup times, and rework probabilities) and are evaluated with an efficient dispatching heuristic (EDDR). The proposed algorithm is evaluated in terms of maximum lateness and the number of reworked jobs. We show that the PSBS algorithm considerably improves on the results obtained by EDDR alone.
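
As a rough illustration of the problem-space idea described in this abstract, a minimal sketch follows in which neighbors are generated by perturbing the problem data and every candidate is scored on the original data. The `dispatch` and `evaluate` callables stand in for the EDDR heuristic and the lateness/rework objective; all names and the noise model are hypothetical, not taken from the paper.

```python
import random

def perturb(vector, sigma=0.1):
    """Return a noisy copy of one problem-data vector
    (processing times, due dates, setup times, or rework probabilities)."""
    return [max(0.0, v + random.gauss(0.0, sigma * max(v, 1e-9))) for v in vector]

def problem_space_search(data, dispatch, evaluate, iters=1000):
    """Problem-space search: perturb the data, dispatch on the perturbed
    instance, but always score the resulting schedule on the TRUE data."""
    best = dispatch(data)
    best_cost = evaluate(best, data)
    for _ in range(iters):
        neighbor = {name: perturb(vec) for name, vec in data.items()}
        schedule = dispatch(neighbor)
        cost = evaluate(schedule, data)  # score against the original instance
        if cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost
```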

Securing a Cyber Physical System in Nuclear Power Plants Using Least Square Approximation and Computational Geometric Approach

  • Gawand, Hemangi Laxman; Bhattacharjee, A.K.; Roy, Kallol
    • Nuclear Engineering and Technology / v.49 no.3 / pp.484-494 / 2017
  • In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control-aware attack) on the controller/SCADA software can drive a control system into an unsafe mode or even cause a complete shutdown of the plant. Such malware attacks can impose tremendous recovery, cleanup, and maintenance costs on the organization. SCADA systems in operational mode generate huge log files. These files are useful for analyzing plant behavior and for diagnostics during an ongoing attack, but they are bulky and difficult to inspect manually. Data mining techniques such as least squares approximation and computational geometric methods can be used to analyze the logs and to take proactive action when required. This paper explores methodologies and algorithms for developing an effective monitoring scheme against control-aware cyber attacks, and explains soft computation techniques, such as the computational geometric method and least squares approximation, that can be effective in monitor design. It demonstrates the effectiveness of this diagnostic monitoring through attack simulations on a four-tank model, using the computational techniques to diagnose the attacks. Cyber security of the instrumentation and control systems used in nuclear power plants is of paramount importance, as such systems are a possible target of these attacks.
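
As a loose sketch of how a least-squares approximation could flag anomalous log entries, the snippet below fits a linear trend to a sliding window of a univariate sensor log and raises an alarm when a new reading deviates from the extrapolated trend. The window size and threshold are assumptions; the paper's four-tank simulations and geometric method are not reproduced here.

```python
import numpy as np

def lsq_monitor(log, window=50, threshold=3.0):
    """Flag time points where a reading deviates from the least-squares
    linear trend of the preceding window by more than `threshold` sigmas."""
    log = np.asarray(log, dtype=float)
    alarms = []
    for t in range(window, len(log)):
        seg = log[t - window:t]
        x = np.arange(window)
        A = np.vstack([x, np.ones(window)]).T
        (slope, intercept), *_ = np.linalg.lstsq(A, seg, rcond=None)
        resid = seg - (slope * x + intercept)
        pred = slope * window + intercept  # one-step extrapolation
        if abs(log[t] - pred) > threshold * resid.std():
            alarms.append(t)
    return alarms
```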

An Optimization Method for the Calculation of SCADA Main Grid's Theoretical Line Loss Based on DBSCAN

  • Cao, Hongyi; Ren, Qiaomu; Zou, Xiuguo; Zhang, Shuaitang; Qian, Yan
    • Journal of Information Processing Systems / v.15 no.5 / pp.1156-1170 / 2019
  • In recent years, the problem of data drift in the smart grid caused by manual operation has been widely studied by researchers in related domains. Effectively and reliably identifying the valid data needed by the Supervisory Control and Data Acquisition (SCADA) system has become an important research topic. This paper analyzes the data composition of the smart grid and explains the power model in two smart grid applications, followed by an analysis of how each parameter is used in the density-based spatial clustering of applications with noise (DBSCAN) algorithm. A comparison is then carried out between the boxplot method, the probability weight analysis method, and the DBSCAN clustering algorithm on big power grid data. The comparison shows that the DBSCAN algorithm outperforms the other methods. Experimental verification shows that the DBSCAN clustering algorithm can effectively screen the power grid data, thereby significantly improving the accuracy and reliability of the calculated theoretical line loss of the main grid.
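
A minimal sketch of the screening step using scikit-learn's DBSCAN, assuming univariate readings; `eps` and `min_samples` are illustrative values, not the paper's tuned parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def screen_grid_data(readings, eps=0.5, min_samples=10):
    """Keep only SCADA readings that fall inside a dense cluster;
    DBSCAN labels outliers (drifted/erroneous points) as -1."""
    X = np.asarray(readings, dtype=float).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return X[labels != -1].ravel()
```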

Evaluating Conversion Rate from Advertising in Social Media using Big Data Clustering

  • Alyoubi, Khaled H.; Alotaibi, Fahd S.
    • International Journal of Computer Science & Network Security / v.21 no.7 / pp.305-316 / 2021
  • The objective is to recognize better opportunities in targeted display advertising: showing a banner ad to the online consumer who is most likely to take a desired action, such as signing up for a newsletter or buying a product. Finding the best advertising impression, i.e., the opportunity to show an advertisement to a consumer, requires the ability to estimate the probability that the consumer who sees the advertisement in the browser will take that action, i.e., will convert. Conversion probability estimation is a demanding task, however, because data grow enormously across many dimensions while the conversion event itself occurs infrequently. Retailers and manufacturers make extensive use of internet retail services as part of a multichannel distribution and promotion strategy. The rate at which website visitors become customers is low in online retail, resulting in high customer acquisition costs; approximately 96 percent of website visits end without a purchase [1]. Conversion-rate data collected from social media advertising sites and pages are estimated and assessed using big data clustering, which groups people by age group together with their behavior. This helps identify the right consumers for a product, which improves profitability.
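
The abstract does not name a specific clustering algorithm, so the sketch below uses k-means as a stand-in to group users by age and behavior; the records and the choice of three segments are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical user records: (age, sessions per week, ad clicks per week).
users = np.array([
    [22, 14, 3],
    [24, 11, 4],
    [45,  3, 0],
    [47,  2, 1],
    [33,  7, 2],
    [31,  8, 2],
])

# Group users into behavioral segments; k=3 is an assumption, not from the paper.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(users)
for label, user in zip(kmeans.labels_, users):
    print(f"segment {label}: age={user[0]}, sessions={user[1]}, clicks={user[2]}")
```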

Process Fault Probability Generation via ARIMA Time Series Modeling of Etch Tool Data

  • Arshad, Muhammad Zeeshan; Nawaz, Javeria; Park, Jin-Su; Shin, Sung-Won; Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference / 2012.02a / pp.241-241 / 2012
  • The semiconductor industry has been taking advantage of improvements in process technology to maintain reduced device geometries and stringent performance specifications. As a result, semiconductor manufacturing now involves hundreds of process steps in sequence, and this number is expected to keep increasing, which may in turn reduce yield. With a large amount of investment at stake, this motivates tighter process control and fault diagnosis. The continuous improvement in the semiconductor industry demands advancements in process control and monitoring to the same degree. Any fault in the process must be detected and classified with a high degree of precision, and diagnosed if possible. A detected abnormality in the system is then classified to locate the source of the variation. The performance of a fault detection system is directly reflected in the yield, so a highly capable fault detection system is always desirable. In this research, time series modeling of data from etch equipment has been investigated for the ultimate purpose of fault diagnosis. The tool data consisted of a number of different parameters, each recorded at fixed time points. As the data had been collected over a number of runs, it was not synchronized, owing to variable delays and offsets in the data acquisition system and networks. The data was therefore synchronized using a variant of the Dynamic Time Warping (DTW) algorithm. The AutoRegressive Integrated Moving Average (ARIMA) model was then applied to the synchronized data. The ARIMA model combines the autoregressive and moving average models to relate the present value of the time series to its past values. As new parameter values are received from the equipment, the model uses them together with previous values to produce one-step-ahead predictions for each parameter. Statistical comparison of these predictions with the actual values gives each parameter's probability of fault at each time point and, once a run finishes, for each run. This work will be extended by applying a suitable probability generating function and combining the probabilities of different parameters using Dempster-Shafer Theory (DST). DST provides a way to combine evidence from different sources into a joint degree of belief in a hypothesis, yielding a combined belief of fault in the process with high precision.
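
A minimal sketch of the one-step-ahead ARIMA scoring described above, using statsmodels; the (1,1,1) order and the conversion of the forecast deviation into a score are assumptions, and the paper's DTW synchronization and Dempster-Shafer combination are not reproduced.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def one_step_fault_score(series, order=(1, 1, 1)):
    """Fit an ARIMA model to one tool parameter and score how far the last
    observation falls from its one-step-ahead forecast (in forecast sigmas)."""
    history, last = series[:-1], series[-1]
    fit = ARIMA(history, order=order).fit()
    forecast = fit.get_forecast(steps=1)
    mean = forecast.predicted_mean[0]
    sigma = forecast.se_mean[0]
    return abs(last - mean) / sigma  # larger => more fault-like

rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0, 1, 200))  # synthetic drifting parameter
print(one_step_fault_score(signal))
```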

A Study on Flexibility Acquisition Method for VLCC Shaft System (VLCC 축계 시스템의 유연성 확보 방안에 관한 연구)

  • Shin, Sang-Hoon; Ko, Dae-Eun
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.135-139 / 2017
  • The main cause of heat accidents at the after stern tube bearing (STB) is excessive local pressure caused by deflection of the propulsion shaft under propeller loads. The probability of a heat accident is increased by the low flexibility of the shaft system in very large crude oil carriers (VLCCs), as engine power and shaft diameter increase while the distance between the forward and after STBs decreases. This study proposes a shaft system with only an after STB and no forward STB as a way to secure flexibility in a VLCC shaft system under hull deformation. A Hertzian contact condition was applied, which assumes a half-elliptical pressure distribution along the contact width, to calculate the local squeeze pressure. The propeller loads, heat effects, and hull deflection under engine operating conditions were also considered. The results show that the required design criteria were satisfied by building a partial slope into the white metal, the material on the axial contact side of the after STB. This system could reduce building cost by simplifying the shaft system.
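
For reference, the half-elliptical Hertzian pressure profile the paper assumes can be written down for an idealized cylinder-on-plane line contact. The sketch below uses the classical Hertz formulas only, with none of the paper's propeller-load, thermal, or hull-deflection effects, and all numerical inputs are illustrative.

```python
import numpy as np

def hertz_line_contact(load, length, radius, e_star, n=101):
    """Classical Hertz line contact: half-width b and half-elliptical
    pressure p(x) = p_max * sqrt(1 - (x/b)^2) across the contact width."""
    w = load / length                                 # load per unit length [N/m]
    b = np.sqrt(4.0 * w * radius / (np.pi * e_star))  # contact half-width [m]
    p_max = 2.0 * w / (np.pi * b)                     # peak pressure [Pa]
    x = np.linspace(-b, b, n)
    return x, p_max * np.sqrt(1.0 - (x / b) ** 2)

# Illustrative numbers only (not from the paper): 1 MN over 0.8 m of bearing,
# 0.4 m journal radius, 100 GPa effective contact modulus.
x, p = hertz_line_contact(1.0e6, 0.8, 0.4, 100e9)
print(f"peak pressure ~ {p.max() / 1e6:.1f} MPa")
```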

Analyzing Factors Affecting the Use of Landowner's Purchase Requisition Policy in Bukhansan National Park (북한산국립공원 내 토지매수 청구 제도 활용 요인 분석)

  • Chan Yong Sung; Young Jae Yi
    • Korean Journal of Environment and Ecology / v.37 no.6 / pp.499-507 / 2023
  • This study conducted an empirical analysis of the land purchase requisition policy in Bukhansan National Park to identify the policy's efficacy, limitations, and implications. A logistic regression analysis was conducted to identify factors that affected landowners' decisions to apply for land purchase requisition, using the government's records on the acquisition of private lands in the park since 2006, when the policy was first implemented. The results show that the probability that a landowner applied for purchase requisition increased if the land was classified as forest, if a large proportion of the land was designated as a nature conservation district, if it was located farther from the park boundary, and if it had a higher appraised value per square meter. These results indicate that the less chance landowners had to utilize their lands, the more likely they were to apply for purchase requisition. They also imply that the government can achieve a high level of conservation performance if private lands are acquired through the purchase requisition policy. The logistic regression model further predicts that 401 m2 of the private lands in Bukhansan National Park will likely be purchase-requested in the future. Despite its usefulness in mitigating landowners' complaints in national parks, the land purchase requisition policy has not been widely utilized. Based on these empirical results, this study provides policy implications for facilitating the utilization of this policy.
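
A minimal sketch of a logistic regression of this form, with predictors mirroring those named in the abstract; the parcel records, units, and fitted coefficients below are entirely hypothetical and are not the paper's data or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical parcel records: [is_forest, share in nature-conservation
# district, distance to park boundary (km), appraised value per m^2
# (10^4 KRW)]; y = 1 if the owner applied for purchase requisition.
X = np.array([
    [1, 0.9, 2.1, 35.0],
    [1, 0.7, 1.8, 28.0],
    [0, 0.2, 0.3, 12.0],
    [0, 0.1, 0.2, 10.0],
    [1, 0.8, 1.5, 30.0],
    [0, 0.3, 0.5, 15.0],
])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1, 0.6, 1.0, 25.0]])[:, 1])  # P(apply) for a new parcel
```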

Robust Maneuvering Target Tracking Applying the Concept of Multiple Model Filter and the Fusion of Multi-Sensor (다중센서 융합 및 다수모델 필터 개념을 적용한 강인한 기동물체 추적)

  • Hyun, Dae-Hwan; Yoon, Hee-Byung
    • Journal of Intelligence and Information Systems / v.15 no.1 / pp.51-64 / 2009
  • Location tracking sensors such as GPS, INS, radar, and optical equipment are used to track maneuvering targets with multiple sensors; such systems are used to track, detect, and control UAVs, guided missiles, and spacecraft. Until now, most studies on tracking maneuvering targets have fused multiple radars or added a supplementary sensor to INS and GPS. However, because system and error properties differ across sensors, a method is needed to vary each sensor's degree of contribution to the fusion. In this paper, we analyze the error properties of the sensors after adding a ground radar to GPS and INS to improve tracking performance through multi-sensor fusion, and we propose a tracking algorithm that improves precision and stability by changing each sensor's weight according to its error. For evaluation, we extract altitude values from a simulated UAV trajectory and apply the proposed algorithm to analyze its performance. By changing the weight of the estimated values according to the degree of error in each sensor's navigation information, we improve the precision of the navigation information and achieve robust tracking that is not affected by deliberate environmental changes or external disturbances.
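
A minimal sketch of error-driven weighting, using inverse-variance weights as a stand-in for the paper's method of adjusting each sensor's contribution according to its error; all readings and variances are hypothetical.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion: sensors with smaller error variance
    receive larger weight, so the fused estimate tracks the more reliable sensor."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return float(np.dot(w, estimates)), w

# Hypothetical altitude readings [m] and current error variances
# for GPS, INS, and ground radar.
alt, weights = fuse([1012.0, 1005.5, 1010.2], [4.0, 25.0, 9.0])
print(alt, weights)
```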

Weighting Effect on the Weighted Mean in Finite Population (유한모집단에서 가중평균에 포함된 가중치의 효과)

  • Kim, Kyu-Seong
    • Survey Research / v.7 no.2 / pp.53-69 / 2006
  • Weights can be constructed and imposed both in the sample design stage and in the analysis stage of a sample survey. While in the design stage weights are related to sample data acquisition quantities such as the sample selection probability and the response rate, in the analysis stage weights are connected to external quantities, for instance population quantities and auxiliary information. The final weight is the product of the weights from both stages. In this paper, we focus on the weights in the analysis stage and investigate their effect on the weighted mean when estimating the population mean. We consider a finite population with a pair of fixed survey value and weight in each unit, and assume an equal selection probability design. Under these conditions we derive formulas for the bias and the mean square error of the weighted mean and show that the weighted mean is biased, and that the direction and amount of the bias can be explained by the correlation between the survey variate and the weight: if the correlation coefficient is positive, the weighted mean over-estimates the population mean; if negative, it under-estimates. The magnitude of the bias also grows as the correlation coefficient increases. In addition to the theoretical derivation, we conduct a simulation study to quantify the bias and mean square error numerically. In the simulation, nine weights with correlation coefficients with the survey variate ranging from -0.2 to 0.6 are generated, four sample sizes from 100 to 400 are considered, and the bias and mean square error are calculated in each case. As a result, with a sample size of 400 and a correlation coefficient of 0.55, the squared bias accounts for up to 82% of the mean square error of the weighted mean, showing that the weighted mean can be very seriously biased in some cases.
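
A small Monte Carlo sketch of the effect described above: when the weights are positively correlated with the survey variable, the weighted mean drifts above the true mean. The data-generating mechanism here (lognormal weights driven by a correlated Gaussian) is an assumption for illustration, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_mean_bias(rho, n=400, reps=2000, mu=50.0):
    """Average deviation of the weighted mean from the true mean `mu`
    when weights correlate (roughly at level rho) with the survey variable."""
    dev = 0.0
    for _ in range(reps):
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
        y = mu + 10.0 * z[:, 0]        # survey variable with true mean mu
        w = np.exp(0.5 * z[:, 1])      # positive weights, correlated with y
        dev += np.average(y, weights=w) - mu
    return dev / reps

for rho in (-0.2, 0.0, 0.3, 0.6):
    print(f"rho={rho:+.1f}  bias~{weighted_mean_bias(rho):+.3f}")
```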

Computed Tomographic Evaluation of Three Canine Patients with Head Trauma (개에서 컴퓨터단층촬영을 이용한 두부 외상의 평가 3례)

  • Kim, Tae-Hun; Kim, Ju-Hyung; Cho, Hang-Myo; Cheon, Haeng-Bok; Kang, Ji-Houn; Na, Ki-Jeong; Mo, In-Pil; Lee, Young-Won; Choi, Ho-Jung; Kim, Gon-Hyung; Chang, Dong-Woo
    • Journal of Veterinary Clinics / v.24 no.4 / pp.667-672 / 2007
  • This report describes the use of conventional computed tomography (CT) for the diagnosis of head trauma in three canine patients. Based on physical and neurologic examinations, survey radiography, and computed tomography, these patients were diagnosed with traumatic brain injury. CT is the imaging modality of first choice for head trauma patients: it provides rapid image acquisition, superior bone detail, and better visualization of acute hemorrhage than magnetic resonance imaging, and it is also less expensive and more readily available. Pre-contrast computed tomography was used to image the head, and post-contrast CT was then performed using the same technique. The Modified Glasgow Coma Scale (MGCS) score was used to predict the probability of survival after head trauma in these dogs. Computed tomography showed a fluid-filled tympanic bulla, a fracture of the left temporal bone, and cerebral parenchymal hemorrhage with post-contrast ring enhancement. In one case, however, the computed tomographic examination did not delineate a cerebellar parenchymal hemorrhage that was later found at postmortem examination. Treatment of the patients placed in intensive care focused on maintaining cerebral perfusion pressure and normalizing intracranial pressure. In these cases, diagnostic computed tomography was a useful procedure that revealed the accurate location of the hemorrhagic lesions.