• Title/Summary/Keyword: Variable Threshold Level


Standardization for basic association measures in association rule mining (연관 규칙 마이닝에서의 평가기준 표준화 방안)

  • Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.21 no.5 / pp.891-899 / 2010
  • An association rule is a technique that represents the relationship between two or more items by numerically expressing the relevance of each item in vast databases, and it is one of the most widely used techniques in data mining. The basic thresholds for association rules are support, confidence, and lift; these are used to generate the association rules. We need standardization of lift because the range of lift values differs from that of support and confidence, and we also need standardization of support and confidence to objectively compare the association levels of several antecedent variables for one consequent variable. In this paper we propose a method for standardizing association thresholds that considers the marginal probability of each item, in order to grasp association levels objectively and exactly; we check the conditions for association criteria and then compare the original association thresholds with the standardized ones using some concrete examples.
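
As background, the sketch below computes the three basic thresholds named in the abstract on toy transactions. The paper's own standardization formula is not given in the abstract and is not reproduced here; only the range mismatch it addresses is illustrated.

```python
# Background sketch (toy data): the three basic association thresholds the
# paper builds on. The paper's standardization scheme itself is not shown.

def association_measures(transactions, antecedent, consequent):
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_b = sum(1 for t in transactions if consequent <= t)
    n_ab = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = n_ab / n             # P(A and B), bounded in [0, 1]
    confidence = n_ab / n_a        # P(B | A), bounded in [0, 1]
    lift = confidence / (n_b / n)  # P(B | A) / P(B), unbounded above
    return support, confidence, lift

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"milk", "butter"}, {"bread", "milk", "butter"}]
print(association_measures(transactions, {"bread"}, {"milk"}))
# lift can exceed 1 without bound, unlike support and confidence; this range
# mismatch is what motivates the standardization the paper proposes.
```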

Random Noise Addition for Detecting Adversarially Generated Image Dataset (임의의 잡음 신호 추가를 활용한 적대적으로 생성된 이미지 데이터셋 탐지 방안에 대한 연구)

  • Hwang, Jeonghwan;Yoon, Ji Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.629-635 / 2019
  • In deep learning models, differentiation is implemented by error back-propagation, which enables the model to learn from the error and update its parameters. Taking advantage of huge improvements in computing power, it can find the global (or local) optima of the parameters even in complex models. However, deliberately generated data points can 'fool' models and degrade performance measures such as prediction accuracy. Not only do these adversarial examples reduce performance, they are also not easily detectable by the human eye. In this work, we propose a method to detect adversarial datasets through random noise addition. We exploit the fact that when random noise is added, the prediction accuracy on a non-adversarial dataset remains almost unchanged, while that on an adversarial dataset changes. In a simulation experiment, we set the attack method (FGSM, Saliency Map) and the noise level (0-19, with max pixel value 255) as independent variables, and the difference in prediction accuracy after noise addition as the dependent variable. We succeeded in extracting a threshold that separates non-adversarial and adversarial datasets, and we detected the adversarial dataset using this threshold.
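
A minimal sketch of the detection idea, assuming a classifier with a Keras-style `predict` method; `THRESHOLD` is a hypothetical cutoff, since the paper derives its threshold empirically from the FGSM and Saliency Map experiments.

```python
import numpy as np

# Sketch: accuracy on clean data is nearly unchanged by small random noise,
# while accuracy on adversarial data shifts noticeably. `model` and
# THRESHOLD are placeholders, not the paper's exact values.

THRESHOLD = 0.05  # hypothetical accuracy-difference cutoff

def accuracy(model, images, labels):
    preds = model.predict(images).argmax(axis=1)
    return (preds == labels).mean()

def looks_adversarial(model, images, labels, noise_level=10, trials=5):
    base = accuracy(model, images, labels)
    diffs = []
    for _ in range(trials):
        noise = np.random.randint(-noise_level, noise_level + 1, images.shape)
        noisy = np.clip(images.astype(int) + noise, 0, 255).astype(np.uint8)
        diffs.append(abs(base - accuracy(model, noisy, labels)))
    return np.mean(diffs) > THRESHOLD  # large shift -> likely adversarial
```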

The Study about influence of immersiveness on PPL advertising in on-line game (몰입 정도가 온라인 게임 내 PPL 인지에 미치는 영향에 대한 연구)

  • Park, Seong-Min;Ryu, Seoung-Ho
    • Journal of Korea Game Society / v.6 no.3 / pp.67-76 / 2006
  • This study examines the relationship between immersion and the PPL (product placement advertising) recognition of gamers who are exposed to PPL in online games. The frequency of exposure, the placement type of PPL, perception below the threshold of consciousness, interest in the product, and knowledge about PPL have been widely known as factors in the effectiveness of PPL advertising. In this study, 'immersiveness' is newly introduced to investigate the effectiveness of PPL advertising, since games have the unique characteristic of easily inducing immersion. The control group was exposed to a movie clip and the experimental group to a racing online game. In conclusion, deep immersion reduced users' perception of PPL for every variable except PPL placement. This study suggests that designing PPL placement in games to enhance the marketing effect is all the more vital, given the result that players in high immersion cannot distinguish different images.


A Study on the Improvement of Plastic Boat Manufacturing Process Using TOC & Statistical Analysis (TOC와 통계적 분석에 의한 플라스틱보트 제조공정 개선에 관한 연구)

  • Yoon, Gun-Gu;Kim, Tae-Gu;Lee, Dong-Hyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.39 no.1 / pp.130-139 / 2016
  • The purpose of this paper is to analyze the problems and sources of defective products and to draw up improvement plans for a small plastic boat manufacturing process using TOC (Theory of Constraints) and statistical analysis. TOC is a methodology that derives a scheme for optimizing the production process by finding the CCR (Capacity Constrained Resource) in the organization or across the entire production process through concentrated improvement activities. In this paper, we found and reformed constraints and bottlenecks in the target company's plastic boat manufacturing process to lower the defect ratio and production cost by applying DBR (Drum, Buffer, Rope) scheduling, and we set threshold values for the critical process variables using statistical analysis. The results can be summarized as follows. First, CCRs in inventory control, material mix, and oven setting were found, and solutions were suggested by applying the DBR method. Second, the logical thinking process was utilized to find core conflict factors and draw up solutions. Third, to specify the solution plan, experimental data were statistically analyzed. Data were collected from the daily journal recording the details of 96 products, such as temperature, humidity, duration and temperature of the heating process, rotation speed, duration of cooling, and temperature of the removal process. Basic statistics and logistic regression analysis were conducted with defectiveness as the dependent variable (see the sketch below). Finally, critical values for the major processes were proposed based on the analysis. This paper has practical importance in its contribution to the quality level of the target company through a theoretical approach, TOC, and statistical analysis. However, the limited amount of data might weaken the significance of the analysis, so specifying the significant manufacturing conditions across different products and processes will be an interesting direction for further research.
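
A sketch of the statistical step under stated assumptions: the process-variable names and values below are hypothetical stand-ins for the 96 daily-journal records, and scikit-learn's regularized logistic regression stands in for the paper's logistic analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data; columns: heating temperature (C),
# heating duration (min), rotation speed (rpm).
X = np.array([[260, 18, 4.2], [255, 20, 4.0], [272, 14, 4.6],
              [250, 22, 3.8], [268, 15, 4.5], [258, 19, 4.1],
              [270, 16, 4.4], [252, 21, 3.9]])
y = np.array([0, 0, 1, 0, 1, 0, 1, 0])   # 1 = defective product

clf = LogisticRegression().fit(X, y)
print(clf.coef_)  # coefficient signs hint at which variables drive defects

# A critical value for one process variable can then be read off as the
# level at which the predicted defect probability crosses an acceptable rate.
```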

A Parallel Equalization Algorithm with Weighted Updating by Two Error Estimation Functions (두 오차 추정 함수에 의해 가중 갱신되는 병렬 등화 알고리즘)

  • Oh, Kil-Nam
    • Journal of the Institute of Electronics Engineers of Korea TC / v.49 no.7 / pp.32-38 / 2012
  • In this paper, a parallel equalization algorithm using two error estimation functions is proposed to eliminate the intersymbol interference of the received signal caused by multipath propagation. In the proposed algorithm, multilevel two-dimensional signals are treated as equivalent binary signals; error signals are then estimated using a sigmoid nonlinearity, which is effective in the initial phase of equalization, and a threshold nonlinearity, which has high steady-state performance. The two errors are scaled by a weight that depends on the relative accuracy of the two error estimates, and the two filters are updated differentially. As a result, the combined output of the two filters approaches the optimum value: fast convergence in the initial stage of equalization and a low steady-state error level are achieved at the same time, thanks to the smooth combining of the two operation modes. The usefulness of the proposed algorithm was verified through computer simulations and compared with the conventional method.
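
A conceptual sketch of the dual-error idea for one equivalent binary component, not the paper's exact formulation: a smooth sigmoid error drives initial convergence, a hard threshold (decision-directed) error refines the steady state, and a reliability weight blends the two filter outputs. Step sizes and the weighting rule here are illustrative assumptions.

```python
import numpy as np

def sigmoid(v, beta=1.0):
    return np.tanh(beta * v)  # smooth soft decision for +/-1 symbols

def equalize(x, mu=1e-3, taps=11):
    """x: received real-valued sequence (numpy array)."""
    w1 = np.zeros(taps); w1[taps // 2] = 1.0  # filter driven by soft error
    w2 = w1.copy()                            # filter driven by hard error
    lam, out = 0.5, []
    for k in range(taps, len(x)):
        u = x[k - taps:k][::-1]               # regressor, most recent first
        y1, y2 = w1 @ u, w2 @ u
        e1 = sigmoid(y1) - y1                 # sigmoid error: good at startup
        e2 = np.sign(y2) - y2                 # threshold error: good at steady state
        w1 += mu * e1 * u
        w2 += mu * e2 * u
        lam = 0.99 * lam + 0.01 * float(abs(e2) < abs(e1))  # track the better mode
        out.append((1.0 - lam) * y1 + lam * y2)  # weighted combined output
    return np.array(out)
```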

Variables that Affect Selective Optimization with Compensation (SOC) for Successful Aging Among Middle-Class Elderly (성공적인 노화를 위한 선택.적정화.보상책략 관련 변인 연구 -중산층 노인을 중심으로-)

  • 하정연;오윤자
    • Journal of Families and Better Life / v.21 no.2 / pp.131-144 / 2003
  • Selective Optimization with Compensation (SOC), a concept defined by Baltes and Baltes, is known to predict successful aging. This study was conducted to find out which factors affect Korean elderly people's SOC. The data were obtained from a survey conducted between March and May 2001 on a sample of middle-class male and female participants over 60 years old; 254 completed questionnaires were used for the final analyses. Descriptive statistics, t-tests, ANOVA, Duncan tests, Pearson correlations, multiple regressions, multiple response frequencies, and sequential threshold methods were used to analyze the data. To measure successful aging, the Selective Optimization with Compensation Scale developed by Baltes, Baltes, Freund, and Lang (1996) was used; the SOC scale consists of four subscales: Elective Selection, Loss-based Selection, Optimization, and Compensation. The major findings are summarized as follows. First, the level of SOC across various socio-demographic variables was examined; health status turned out to be the most important variable in predicting SOC, and satisfaction with family life was also important. Second, significant correlations were found between SOC and duration of marriage (negative), and between SOC and practicing a religion, health, and economic stability (all positive). Third, religion and health status affected SOC, but health was the stronger predictor: those who practiced a religion and were healthy had higher SOC scores as a whole. Fourth, the participants were divided into three groups by their SOC scores, and their ideas of successful aging were compared. The top- and middle-score groups considered satisfaction with family life to be more important, whereas the bottom-score group regarded social status as more important.

Proposal and Application of Water Deficit-Duration-Frequency Curve using Threshold Level Method (임계수준 방법을 이용한 물 부족량-지속기간-빈도 곡선의 제안 및 적용)

  • Sung, Jang Hyun;Chung, Eun-Sung
    • Journal of Korea Water Resources Association / v.47 no.11 / pp.997-1005 / 2014
  • This study evaluated hydrological drought using the annual minimum flow and the annual maximum deficit methods, and proposed the new concept of water deficit-duration-frequency curves, analogous to rainfall intensity-duration-frequency curves. In the annual minimum flow analysis, the return periods of hydrological drought were longest for most durations in 1989 and 1996. In the annual maximum deficit analysis, the return periods of the 60-day and 90-day deficits, which represent relatively short durations, were longest in 1995, at about 35 years, while the longer-lasting hydrological drought, also in 1995, had a return period of about 20 years. Although duration as well as magnitude is a key variable in drought analysis, it was found that the method using the annual minimum flow does not distinguish duration.
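
A minimal sketch of the threshold level method underlying such curves: whenever daily flow drops below a chosen threshold, a deficit event accumulates volume and duration, and the annual maxima of these deficits feed the frequency analysis. Using a flow-duration percentile as the threshold is an assumption here, not necessarily the paper's choice.

```python
import numpy as np

def deficit_events(flow, threshold, dt=86400):
    """Deficit events below a threshold; flow in m^3/s, dt in s/day."""
    events, vol, dur = [], 0.0, 0
    for q in flow:
        if q < threshold:
            vol += (threshold - q) * dt   # accumulated deficit volume (m^3)
            dur += 1                      # event duration (days)
        elif dur:
            events.append((vol, dur))     # event ends when flow recovers
            vol, dur = 0.0, 0
    if dur:
        events.append((vol, dur))
    return events

flow = np.array([12, 9, 7, 6, 8, 11, 13, 10, 6, 5, 7, 12], dtype=float)
print(deficit_events(flow, threshold=np.percentile(flow, 30)))
```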

Adaptive Congestion Control for Effective Data Transmission in Wireless Sensor Networks (센서네트워크에서의 효율적인 데이터 전송을 위한 적응적 혼잡 제어)

  • Lee, Joa-Hyoung;Gim, Dong-Gug;Jung, In-Bum
    • The KIPS Transactions:PartC / v.16C no.2 / pp.237-244 / 2009
  • Congestion in a wireless sensor network increases the rate of data loss and causes data delay. Existing congestion protocols for wireless sensor networks reduce the amount of transmission by controlling the sampling frequency of the sensor nodes involved in the congestion once it has occurred and been detected. However, controlling the sampling frequency is not applicable in situations that are sensitive to temporal data loss. In this paper, we propose a new congestion control scheme, ACT (Adaptive Congestion conTrol). ACT monitors the network traffic through queue usage and detects congestion based on multi-level thresholds of queue usage. Given network congestion, ACT increases the efficiency of the network with an adaptive flow control method that adjusts the frequency of packet transmission and guarantees fairness of packet transmission between nodes. Furthermore, ACT increases the quality of the data by using a variable compression method. Through experiments, we show that ACT increases network efficiency and guarantees fairness to sensor nodes compared with the existing method.
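
A minimal sketch of the control loop described above, with illustrative thresholds and scaling factors: queue usage is mapped to a congestion level by multiple thresholds, and each level stretches the transmission interval and toggles compression rather than dropping samples.

```python
# Sketch of multi-level-threshold congestion detection and adaptation.
# LEVELS and the scaling rule are illustrative placeholders, not ACT's
# published parameters.

LEVELS = (0.3, 0.6, 0.8)   # queue-usage thresholds (fractions of capacity)

class Node:
    def __init__(self, base_interval=1.0):
        self.base_interval = base_interval  # seconds between packets
        self.send_interval = base_interval
        self.compress = False

def congestion_level(queue_len, queue_cap):
    usage = queue_len / queue_cap
    return sum(usage >= t for t in LEVELS)  # 0 = none ... 3 = severe

def adapt(node, queue_len, queue_cap):
    level = congestion_level(queue_len, queue_cap)
    node.send_interval = node.base_interval * (1 + level)  # slow transmission
    node.compress = level >= 2   # variable compression under heavy load
    return level

node = Node()
print(adapt(node, queue_len=70, queue_cap=100), node.send_interval)  # 2 3.0
```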

Relationships on Magnitude and Frequency of Freshwater Discharge and Rainfall in the Altered Yeongsan Estuary (영산강 하구의 방류와 강우의 규모 및 빈도 상관성 분석)

  • Rhew, Ho-Sang;Lee, Guan-Hong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.16 no.4 / pp.223-237 / 2011
  • The intermittent freshwater discharge has a critical influence upon the biophysical environments and ecosystems of the Yeongsan Estuary, where the estuary dam altered the continuous mixing of saltwater and freshwater. Though freshwater discharge is controlled by humans, the extreme events are mainly driven by heavy rainfall in the river basin and have various impacts depending on their magnitude and frequency. This research aims to evaluate the magnitude and frequency of extreme freshwater discharges and to establish the magnitude-frequency relationships between basin-wide rainfall and freshwater inflow. Daily discharge and daily basin-averaged rainfall from Jan 1, 1997 to Aug 31, 2010 were used to determine the relations between discharge and rainfall. Consecutive daily discharges were grouped into independent events using a well-defined event-separation algorithm, and partial duration series were extracted to obtain the proper probability distribution function for extreme discharges and the corresponding rainfall events. Extreme discharge events over the threshold of 133,656,000 $m^3$ number 46 over the 13.7 years, following the Weibull distribution with k=1.4. The 3-day accumulated rainfalls occurring one day before the peak discharges (the 1day-before-3day-sum rainfall) were chosen as the control variable for discharge, because their magnitude is best correlated with that of the extreme discharge events. The minimum value of the corresponding 1day-before-3day-sum rainfall, 50.98 mm, was initially set as the threshold for selecting discharge-inducing rainfall cases; the number of rainfall groups passing this selection, however, exceeds the number of extreme discharge events. Canonical discriminant analysis indicates that water level over the target level (-1.35 m EL.) can be used to divide the 1day-before-3day-sum rainfall groups into discharge-induced and non-discharge ones, and shows that a newly set threshold of 104 mm separates these two cases without errors. The magnitude-frequency relationships between rainfall and discharge were then established with the newly selected 1day-before-3day-sum rainfalls: $D = 1.111\times10^{8} + 1.677\times10^{6}\,\overline{r_{3day}}$ ($\overline{r_{3day}} \geq 104$, $R^2 = 0.459$), $T_d = 1.326\,T_{r3}^{0.683}$, and $T_d = 0.117\exp[0.0155\,\overline{r_{3day}}]$, where $D$ is the quantity of discharge, $\overline{r_{3day}}$ the 1day-before-3day-sum rainfall, and $T_{r3}$ and $T_d$ are respectively the return periods of the 1day-before-3day-sum rainfall and of the freshwater discharge. These relations provide a framework to evaluate the effect of freshwater discharge on estuarine flow structure, water quality, and ecosystem responses from the perspective of magnitude and frequency.
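
The fitted relations quoted above can be applied directly; the sketch below evaluates the discharge-volume and exponential return-period relations for an example rainfall sum within the fitted range (the example value of 150 mm is arbitrary).

```python
import math

# Worked application of the relations quoted in the abstract:
#   D   = 1.111e8 + 1.677e6 * r3    (m^3, valid for r3 >= 104 mm)
#   T_d = 0.117 * exp(0.0155 * r3)  (years)

def discharge_volume(r3):
    assert r3 >= 104, "relation fitted only for 1day-before-3day-sum >= 104 mm"
    return 1.111e8 + 1.677e6 * r3           # discharge quantity (m^3)

def discharge_return_period(r3):
    return 0.117 * math.exp(0.0155 * r3)    # return period (years)

r3 = 150.0  # example 1day-before-3day-sum rainfall (mm)
print(f"D = {discharge_volume(r3):.3e} m^3, "
      f"T_d = {discharge_return_period(r3):.2f} yr")
```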

Exposure status of welding fumes for operators of overhead traveling crane in a shipyard (대형조선소 천장크레인 운전원의 용접흄 노출 실태)

  • Lee, Kyeongmin;Kim, Boowook;Kwak, Hyunseok;Ha, Hyunchul
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.25 no.3 / pp.301-311 / 2015
  • Objectives: Operators of overhead traveling cranes in a ship assembly factory move large vessel blocks to the appropriate working process. Hazardous agents such as metal dusts, carbon monoxide, carbon dioxide, ozone, loud noise, and fine particles are generated by the various working activities in the factory, and the operators can be exposed to them during work. In particular, welding fume, composed of ultrafine particles and heavy metals, is extremely hazardous to humans when it reaches the lungs through the respiratory pathway, and occupational lung diseases related to welding fumes are on an upward trend. Therefore, the objective of this study is to properly assess the previously unknown occupational exposure of the operators to welding fumes. Methods: This study determined whether the chemical constituents and composition of the dusts present in the driver's cab matched those of generally known welding fumes. Furthermore, a computational fluid dynamics (CFD) program was used for a ventilation assessment of the distribution of welding fumes in the air, and the operators were monitored to assess personal exposure levels to welding fumes and respirable particulate. Results: The dust in the operation room had the same constituents and composition as welding fumes. Welding fumes generated by welding on the factory floor rose with the ascending air current up to the roof and stayed there for a long time, so the operators are considered to be exposed to welding fumes in the operation room. The personal exposure levels to welding fumes and respirable particulate were 0.159 (n=8, range 0.073-0.410) $mg/m^3$ and 0.138 (n=8, range 0.087-0.178) $mg/m^3$, respectively, both below the threshold limit value for welding fumes ($5mg/m^3$). Conclusions: These findings indicate that occupational exposure to welding fumes exists among the operators. Consequently, the operators of overhead traveling cranes need to be kept under constant exposure assessment.
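
A quick arithmetic check of the reported exposures against the threshold limit value, using only the summary statistics quoted in the abstract; individual sample values are not published there.

```python
# Compare reported personal welding-fume exposures with the 5 mg/m^3 TLV.
# Mean and range are taken from the abstract (n=8 operators).

TLV_FUME = 5.0            # mg/m^3, threshold limit value for welding fumes
mean_fume = 0.159         # mg/m^3, reported mean exposure
fume_range = (0.073, 0.410)

print(f"mean/TLV = {mean_fume / TLV_FUME:.1%}")            # about 3% of the TLV
print(f"worst case/TLV = {fume_range[1] / TLV_FUME:.1%}")  # about 8% of the TLV
```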