• Title/Summary/Keyword: Real-time Process Management


Prediction of Target Motion Using Neural Network for 4-dimensional Radiation Therapy (신경회로망을 이용한 4차원 방사선치료에서의 조사 표적 움직임 예측)

  • Lee, Sang-Kyung;Kim, Yong-Nam;Park, Kyung-Ran;Jeong, Kyeong-Keun;Lee, Chang-Geol;Lee, Ik-Jae;Seong, Jin-Sil;Choi, Won-Hoon;Chung, Yoon-Sun;Park, Sung-Ho
    • Progress in Medical Physics
    • /
    • v.20 no.3
    • /
    • pp.132-138
    • /
    • 2009
  • Studies on target motion in 4-dimensional radiotherapy are being conducted worldwide to improve treatment outcomes and the protection of normal organs. Prediction of tumor motion can be very useful, and even essential, for free-breathing delivery systems such as respiratory gating and tumor tracking. A neural network is powerful for expressing a nonlinear time series because its prediction is not governed by a fixed statistical formula but learns a rule from the data itself. This study assessed the applicability of the neural network method to predicting tumor motion in 4-dimensional radiotherapy. The Scaled Conjugate Gradient algorithm was employed for learning. Using respiration data for 10 patients, predictions by the neural network were compared with measurements by the real-time position management (RPM) system. The results showed that the neural network achieved excellent accuracy, with a maximum absolute error smaller than 3 mm, except for cases in which the maximum amplitude of respiration exceeded the range of respiration used in training, indicating insufficient learning for extrapolation. This problem could be solved by acquiring the full range of respiration before the learning procedure. Further work is planned to verify the feasibility of practical application to 4-dimensional treatment systems, including prediction performance under various system latencies and irregular respiration patterns.

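The basic setup described above can be sketched as follows. This is not the authors' code: the paper trains with Scaled Conjugate Gradient on measured RPM traces, whereas this minimal sketch uses a synthetic breathing signal and plain full-batch gradient descent on a one-hidden-layer network that predicts the next position from a sliding window of past positions.

```python
import numpy as np

# Synthetic stand-in for a measured respiratory trace (the paper uses RPM data).
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)
signal = 10 * np.sin(2 * np.pi * t / 4)          # ~4 s breathing cycle, in mm

WINDOW = 10                                       # past samples per input vector
X = np.stack([signal[i:i + WINDOW] for i in range(len(signal) - WINDOW)])
y = signal[WINDOW:]                               # next-step target position

W1 = rng.normal(0, 0.1, (WINDOW, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16); b2 = 0.0

def predict(X):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return h, h @ W2 + b2                         # linear output

mse_before = np.mean((predict(X)[1] - y) ** 2)
lr = 1e-3
for _ in range(2000):                             # full-batch gradient descent
    h, pred = predict(X)
    err = pred - y
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)         # backprop through tanh
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse_after = np.mean((predict(X)[1] - y) ** 2)
print(f"MSE before {mse_before:.2f} -> after {mse_after:.2f} (mm^2)")
```

The extrapolation failure the abstract reports can be reproduced in this sketch by feeding windows whose amplitude exceeds the training range.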

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. Since Black Monday in 1987, however, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models on the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days served as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher than today's, buy volatility today; if it is lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best SVR-based IVTS performance is +526.4%, against +150.2% for MLE-based IVTS, and SVR-based GARCH IVTS shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in trading, including brokerage commissions and slippage.
IVTS trading performance is also unrealistic in that historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
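The GARCH(1,1) recursion that both the MLE-based and SVR-based approaches estimate, together with the IVTS entry rule, can be sketched as follows. The parameters and return series here are illustrative stand-ins, not the paper's estimates or the KOSPI 200 data:

```python
import numpy as np

def garch11_forecast(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """One-step-ahead conditional variances:
    sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()                 # initialize at sample variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, 500)            # stand-in for daily index returns
sigma2 = garch11_forecast(returns)
vol = np.sqrt(sigma2)

# IVTS rule from the paper: buy volatility if tomorrow's forecast is higher
# than today's, sell if lower, hold if the direction does not change.
signals = np.sign(np.diff(vol))               # +1 buy, -1 sell, 0 hold
print(signals[:10])
```

The SVR variant replaces the fixed (omega, alpha, beta) mapping with a kernel regression fitted to the squared-return series; the forecast-to-signal step is unchanged.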

Recommender Systems using Structural Hole and Collaborative Filtering (구조적 공백과 협업필터링을 이용한 추천시스템)

  • Kim, Mingun;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.107-120
    • /
    • 2014
  • This study proposes a novel recommender system that uses structural hole analysis to reflect qualitative and emotional information in the recommendation process. Although collaborative filtering (CF) is the most popular recommendation algorithm, it has limitations including scalability and sparsity problems. The scalability problem arises when the number of users and items becomes very large: CF cannot scale up because of the large computation time required to find neighbors in the user-item matrix as users and items grow in real-world e-commerce sites. Sparsity is a common problem of most recommender systems because users generally evaluate only a small portion of all items. The cold-start problem is a special case of sparsity in which users or items newly added to the system have no ratings at all. When preference data is sparse, two users or items are unlikely to share common ratings, so CF must predict ratings using a very limited number of similar users, and it may produce biased recommendations because similarity weights are estimated from only a small portion of the rating data. Beyond these, we point out a further limitation of conventional CF: it does not consider qualitative and emotional information about users, because it utilizes only the preference scores of the user-item matrix. To address this limitation, this study proposes a cluster-indexing CF model that uses structural hole analysis for recommendations. In general, a structural hole is a position that connects two separate actors in a network without any redundant connections. The actor occupying a structural hole can easily access non-redundant, varied, and fresh information.
Such an actor is therefore likely to be an important, representative person in the focal subgroup, whose characteristics may represent the general characteristics of the users in that subgroup. In this sense, structural hole analysis lets us distinguish friends and strangers of the focal user. This study uses structural hole analysis to select structural holes in subgroups as initial seeds for a cluster analysis. First, we gather users' preference ratings for items and their social network information, using a data collection system developed for this research. We then perform structural hole analysis to find the structural holes of the social network and use them as cluster centroids for the clustering algorithm. Finally, we make recommendations using CF within each user's cluster and compare the recommendation performance of the comparative models. The experiments consist of two parts. The first is the structural hole analysis, for which we employ UCINET version 6, a software package for the analysis of social network data. The second performs the modified clustering and then CF using the cluster analysis results, with an experimental system developed in VBA (Visual Basic for Applications) for Microsoft Excel 2007. The modified clustering experiment uses the Pearson correlation between user preference rating vectors as the similarity measure, and the CF experiment uses the 'all-but-one' approach. To validate the effectiveness of the proposed model, we apply three comparative types of CF models to the same dataset.
The experimental results show that the proposed model outperforms the comparative models. In particular, the proposed model performs significantly better than the two comparative models with cluster analysis according to the statistical significance test. However, the difference between the proposed model and the naive model is not statistically significant.
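The CF step applied within each cluster can be sketched as below: a missing rating is predicted from Pearson-similarity-weighted deviations of neighbors. The ratings matrix is toy data, not the study's dataset, and the clustering step is assumed to have already selected the neighbors:

```python
import numpy as np

# Toy user-item ratings; the target user's rating for the last item is hidden.
ratings = np.array([
    [5, 3, 4, 4],     # target user (column 3 to be predicted)
    [3, 1, 2, 3],     # neighbors assumed to be in the same cluster
    [4, 3, 4, 3],
    [1, 5, 5, 2],
], dtype=float)

def pearson(u, v):
    """Pearson correlation between two rating vectors."""
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

target, item = 0, 3
neighbors = [1, 2, 3]
# Similarities computed over the co-rated items (columns 0..2 here).
sims = np.array([pearson(ratings[target, :item], ratings[n, :item])
                 for n in neighbors])
# Deviation of each neighbor's rating for the item from that neighbor's mean.
devs = np.array([ratings[n, item] - ratings[n, :item].mean()
                 for n in neighbors])
pred = ratings[target, :item].mean() + (sims @ devs) / np.abs(sims).sum()
print(f"predicted rating: {pred:.2f}")
```

In the proposed model the neighbor set comes from the cluster whose centroid is a structural hole, rather than from a full user-item matrix scan, which is what addresses the scalability problem described above.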

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.151-164
    • /
    • 2024
  • In water treatment plants supplying potable water, managing the chlorine concentration in processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of the sedimentation basins in water treatment processes. An AI-based model, which learns from past water quality observations to predict future water quality, offers a simpler and more efficient approach than complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at Plant, using multiple regression models and AI-based models such as Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model took as independent variables the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, raw water inflow, alkalinity, NH3, etc., and as the dependent variable the desired residual chlorine concentration of the effluent from the sedimentation basin. The independent variables were selected from data observable at the water treatment plant that influence the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that, for Plant, the Random Forest-based model had the lowest error compared with multiple regression models, neural network models, model trees, and other Random Forest variants.
The optimal predicted residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in previous treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
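The multiple-regression baseline against which the Random Forest model is compared can be sketched as below. The input columns are illustrative stand-ins for the paper's observed variables (upstream residual chlorine, turbidity, pH, water temperature), and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([
    rng.uniform(0.5, 1.5, n),     # upstream residual chlorine (mg/L)
    rng.uniform(0.1, 5.0, n),     # turbidity (NTU)
    rng.uniform(6.5, 8.5, n),     # pH
    rng.uniform(5, 25, n),        # water temperature (deg C)
])
# Synthetic ground-truth relationship for the downstream chlorine target.
true_w = np.array([0.8, -0.02, 0.05, -0.01])
y = X @ true_w + 0.3 + rng.normal(0, 0.02, n)    # downstream chlorine (mg/L)

A = np.column_stack([X, np.ones(n)])             # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least squares fit
pred = A @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE: {rmse:.3f} mg/L")
```

The study's finding is that a Random Forest regressor trained on the same input-output structure achieves lower error than this linear baseline; the baseline is shown because its structure (inputs, target, error metric) is what the comparison is built on.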

A Study on the Application of RTLS Technology for the Automation of Spray-Applied Fire Resistive Covering Work (뿜칠내화피복 작업 자동화시스템을 위한 RTLS 기술 적용에 관한 연구)

  • Kim, Kyoon-Tai
    • Journal of the Korea Institute of Building Construction
    • /
    • v.9 no.5
    • /
    • pp.79-86
    • /
    • 2009
  • In a steel structure, spray-applied fire resistive materials are crucial to prevent structural strength from being weakened in the event of a fire. The quality control of such materials, however, depends on skilled manual workers, who are frequently in short supply and are very likely to be exposed to environmental hazards. These problems, namely the difficulty of achieving quality control and the dangerous nature of the work itself, can be solved to some degree by the introduction of automated equipment. It is, however, very difficult to automate the work process, from operation to the selection of a location for the equipment, because the construction site environment has not yet been structured to accommodate automation. This is a fundamental study on the possibility of automating spray-applied fire resistive coating work. It reviews the linkability of cutting-edge RTLS technology to an automation system and presents a scenario for the automation of spray-applied fire resistive coating work, together with a system composition. The system suggested in this study is still at a conceptual stage, and many restrictions remain to be resolved. Nevertheless, automation is expected to be effective in preventing fire from spreading, since it maintains the thickness of the fire-resistive coating at a specified level and secures the integrity of the coating with the steel structure, thereby sustaining structural strength at high temperature and enhancing fire-resistive performance.
It is also expected that future research relating this area to cutting-edge monitoring technologies such as the ubiquitous sensor network (USN) and/or building information model (BIM) will contribute to raising the level of construction automation in Korea, reducing costs through the systematic and efficient management of construction resources, shortening construction periods, and implementing more precise construction.

Analyses of Synchronous Fluorescence Spectra of Dissolved Organic Matter for Tracing Upstream Pollution Sources in Rivers (상류 오염원 추적을 위한 용존 유기물질 Synchronous 형광스펙트럼 분석 연구)

  • Hur, Jin;Kim, Mi-Kyoung;Park, Sung-Won
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.29 no.3
    • /
    • pp.317-324
    • /
    • 2007
  • Fluorescence measurements of dissolved organic matter (DOM) have advantages over other analytical tools for water quality management. A preliminary study was conducted to test the feasibility of applying synchronous fluorescence measurements to tracing and monitoring pollution sources in a small stream located in an upstream area of the Sooyoung watershed in Busan. The water quality of the stream is affected by leachate from a sawdust pile and by discharge of untreated sewage. The sampling sites included an upstream site, two pipes discharging untreated sewage, the sawdust leachate, and a downstream site. Of the five field samples, the leachate was distinguished from the others by a high peak at a lower wavelength range and a blunt peak at 350 nm, suggesting that synchronous fluorescence can be used as a discrimination tool for monitoring the pollution. The efficacy of various indices derived from the spectral features in discriminating the pollution source was tested on well-defined mixtures of the sawdust leachate and the upstream water by comparing (1) the difference between measured values and those predicted from mass balance and the characteristics of the two samples, and (2) the linear correlations between index values and the mass ratios of the sample mixtures. Of the discrimination indices examined, fluorescence intensities at 276 nm (Δλ = 30 nm) and 347 nm (Δλ = 60 nm) were suggested as promising discrimination indices for the sawdust pollution source. Despite the limited number of samples and the limited study area, this study illustrates the evaluation process that should be followed to develop rapid, low-cost discrimination indices for monitoring pollution sources based on end-member mixing analyses.
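The mass-balance test applied to each candidate index can be sketched as below: for a conservatively mixing index, the value measured in a mixture should equal the mass-weighted average of the two end members (sawdust leachate and upstream water). The intensity values are illustrative, not the measured spectra:

```python
# Illustrative end-member index values (arbitrary fluorescence units).
leachate_I = 120.0      # index value for pure sawdust leachate
upstream_I = 15.0       # index value for pure upstream water

def predicted_index(frac_leachate):
    """Mass-balance prediction for a mixture with the given leachate mass fraction."""
    return frac_leachate * leachate_I + (1 - frac_leachate) * upstream_I

# The paper's criterion (2): index values should vary linearly with mass ratio.
for f in (0.0, 0.25, 0.5, 1.0):
    print(f"leachate fraction {f:.2f} -> predicted index {predicted_index(f):.1f}")
```

A good discrimination index is one where the measured mixture values track this linear prediction closely; deviations indicate non-conservative behavior (e.g., quenching), which is what criterion (1) in the abstract checks.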

Reassessment on the Four Major Rivers Restoration Project and the Weirs Management (4대강 살리기사업의 재평가와 보의 운용방안)

  • Lee, Jong Ho
    • Journal of Environmental Impact Assessment
    • /
    • v.30 no.4
    • /
    • pp.225-236
    • /
    • 2021
  • The master plan for the Four Major Rivers Restoration Project was devised in June 2009; the prior environmental review (June 2009), the environmental impact assessment (Nov. 2009), and post-project environmental impact surveys were carried out; and four audits were conducted. Finally, the Ministry of Environment's Four Rivers Investigation and Evaluation Planning Committee proposed the dismantling or partial dismantling of five weirs on the Geum River and Yeongsan River, but controversies and conflicts are still ongoing. This study therefore intends to re-establish the management plan for the four major rivers by reviewing and analyzing the process so far. The results are as follows. First, a proper cost-benefit analysis should compare the water quality impacts of weir operation and weir opening, which is inherently difficult to do. Second, based on the results of a cost-benefit analysis of dismantling the Geum River and Yeongsan River weirs, dismantling and regular sluice gate opening were decided; however, the validity of that decision is questionable because no cost-benefit analysis of maintaining the weirs was carried out. Third, looking at the change in water quality at the 16 weirs before and after the project, COD and Chl-a generally deteriorated, while BOD, SS, T-N, and T-P improved. However, the cost-benefit analysis of water quality at the time of the dismantling decision targeted only the COD item; because the post-project improvements in BOD, SS, T-N, and T-P were not reflected, the water quality benefits of dismantling were exaggerated. Fourth, Gongju weir and Juksan weir are mostly movable weirs, so simply opening the gates when water quality deteriorates can have the same effect as dismantling.
Since the same effect can be expected, there is little need to dismantle these weirs. Fifth, to respond to frequent droughts and floods, it is desirable to secure agricultural water supply capacity for drought areas upstream of the four major rivers by constructing waterways connected to the weirs. At present it is better to keep the weirs than to dismantle them.

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.233-253
    • /
    • 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are largely driven by new information rather than by present and past prices; since new information is unpredictable, the market will follow a random walk. Despite these theories, Schumaker [2010] observed that people keep trying to predict the stock market using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations, and wavelet transforms to model future prices. Artificial intelligence approaches dealing with optimization and machine learning include genetic algorithms, support vector machines (SVM), and neural networks. Statistical approaches typically predict the future from past stock market data. Recently, financial engineers have started to predict stock price movement patterns using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of data mining that extracts public opinions expressed in SNS. There have been many studies on opinion mining from web sources such as product reviews, forum posts, and blogs. Building on this literature, we try to understand the effects of firms' SNS exposures on stock prices in Korea. Similarly to Bollen et al.
[2011], we empirically analyze the impact of SNS exposures on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions in Twitter and blogs using natural language processing and analysis tools: it collects sentences circulating on Twitter in real time, breaks them down into word units, and extracts keywords. In this study, we classify firms' SNS exposures into two groups: positive and negative. To test the correlation and causal relationship between SNS exposures and stock price returns, we first collect 252 firms' stock prices and the KRX100 index on the Korea Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gather the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conduct regression analysis between stock prices and the number of SNS exposures; having checked the correlation between the two variables, we perform a Granger causality test to determine the direction of causation. The result is that the number of total SNS exposures is positively related to stock market returns. The number of positive mentions also has a positive relationship with stock market returns. Conversely, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant, implying that the impact of positive mentions is statistically bigger than that of negative mentions. We also investigate whether the impacts are moderated by industry type and firm size, and find that the SNS exposure impacts are bigger for IT firms than for non-IT firms, and bigger for small firms than for large firms. The Granger causality test shows that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant.
Therefore the relationship between SNS exposures and stock prices exhibits uni-directional causality: the more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes may not cause more SNS mentions.
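The one-lag Granger causality check underlying the test above can be sketched as follows: does adding lagged exposure counts to an autoregression of returns significantly reduce the residual sum of squares? The data here are synthetic, constructed so that exposures lead returns by one day; the paper uses the Social Metrics and KRX series instead:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
exposures = rng.normal(size=n)                   # stand-in for SNS mention counts
returns = np.empty(n)
returns[0] = 0.0
for t in range(1, n):                            # exposures lead returns by 1 day
    returns[t] = 0.2 * returns[t-1] + 0.5 * exposures[t-1] + rng.normal(scale=0.5)

def ssr(A, y):
    """Residual sum of squares of an OLS fit of y on columns of A."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return r @ r

y = returns[1:]
ones = np.ones(n - 1)
restricted = np.column_stack([ones, returns[:-1]])                 # AR(1) only
unrestricted = np.column_stack([ones, returns[:-1], exposures[:-1]])

ssr_r, ssr_u = ssr(restricted, y), ssr(unrestricted, y)
# F test for the one extra regressor (lagged exposures).
f_stat = (ssr_r - ssr_u) / (ssr_u / (len(y) - 3))
print(f"F statistic: {f_stat:.1f}")  # large value -> exposures Granger-cause returns
```

Running the same test with the roles of the two series swapped (returns as the candidate cause) and finding an insignificant F statistic is what establishes the uni-directional causality reported above.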

Designing Mobile Framework for Intelligent Personalized Marketing Service in Interactive Exhibition Space (인터랙티브 전시 환경에서 개인화 마케팅 서비스를 위한 모바일 프레임워크 설계)

  • Bae, Jong-Hwan;Sho, Su-Hwan;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.59-69
    • /
    • 2012
  • The exhibition industry, one of the government's 17 new growth engines, is related to other industries such as tourism, transportation, and finance, so it has a significant ripple effect on them. Exhibition is a knowledge-intensive, eco-friendly, and high value-added industry. Over 13,000 exhibitions are held every year around the world, contributing to foreign currency earnings. The exhibition industry is closely related to culture and tourism, can serve local and national development strategies, and can improve the national brand image as well. Many countries make various efforts to invigorate their exhibition industries by arranging related laws and support systems. In Korea, more than 200 exhibitions are held every year, but only two or three are hosted with over 400 exhibitors, and most of the rest have few foreign exhibitors. The main weakness of domestic trade shows is that no agency manages exhibition-related statistics and there is no specific, reliable evaluation. This makes it impossible to provide buyers or sellers with reliable data, hampers the qualitative growth of exhibitions, and prevents the service quality of trade shows from improving. Attracting many visitors (public, buyers, exhibitors) is crucial to the development of the domestic exhibition industry. To attract many visitors, the service quality of exhibitions and visitor satisfaction should be enhanced. For this purpose, a variety of real-time customized services through digital media, and services for creating new customers and retaining existing ones, should be provided. In addition, personalized information services would let visitors manage their time and space efficiently, avoiding the complexity of the exhibition space.
The exhibition industry can build competitiveness and an industrial foundation by accumulating exhibition-related statistics, creating new information, and enhancing research capability. This paper therefore deals with customized services delivered through visitors' smartphones in the exhibition space and designs a mobile framework that enables exhibition devices to interact with other devices. The mobile server framework is composed of three systems: a multiple interaction server, clients, and exhibition display devices. By building a knowledge pool of the exhibition environment, the data accumulated for each visitor can be provided as personalized service; and based on visitors' reactions, all information is in turn utilized as customized information, forming a cyclic chain structure. The multiple interaction server handles events, manages the interaction process between exhibition devices and visitors' smartphones, and manages data. The client is an application running on the visitor's smartphone and can be driven on a variety of platforms; it functions as the interface presenting customized services to individual visitors and handles event input and output for simultaneous participation. An exhibition device consists of a display system to show visitors contents and information, an interaction input-output system to receive events from visitors and drive actions, and a control system connecting the two. The proposed mobile framework provides individual visitors with customized, active services using their information profiles and accumulated knowledge. In addition, a user participation service is suggested using the interaction connection system between the server, clients, and exhibition devices. The suggested mobile framework is a technology that can be applied to cultural industries such as performances, shows, and exhibitions.
Thus, it builds a foundation to improve visitor participation in exhibitions and to develop the exhibition industry by raising visitors' interest.
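The multiple interaction server's role described above (event handling, routing between visitor smartphones and exhibition devices, accumulating a per-visitor knowledge pool) can be sketched with hypothetical names; this is an illustration of the architecture, not the paper's implementation:

```python
class InteractionServer:
    """Sketch of the multiple interaction server: dispatches client events
    and accumulates a per-visitor knowledge pool for personalization."""

    def __init__(self):
        self.handlers = {}        # event type -> handler function
        self.profiles = {}        # visitor id -> accumulated event payloads

    def on(self, event_type, handler):
        """Register a handler for an event type (e.g. approaching a booth)."""
        self.handlers[event_type] = handler

    def dispatch(self, event_type, visitor_id, payload):
        """Record the event in the visitor's profile and route it to the
        handler, which decides what the display device should show."""
        self.profiles.setdefault(visitor_id, []).append(payload)
        return self.handlers[event_type](visitor_id, payload)

server = InteractionServer()
server.on("approach", lambda vid, p: f"show personalized intro for {vid}")
action = server.dispatch("approach", "visitor-42", {"booth": "A1"})
print(action)
```

The cyclic chain structure from the text corresponds to the profile growing with each dispatched event, so later handler decisions can be conditioned on the accumulated history.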

Effect of Market Basket Size on the Accuracy of Association Rule Measures (장바구니 크기가 연관규칙 척도의 정확성에 미치는 영향)

  • Kim, Nam-Gyu
    • Asia pacific journal of information systems
    • /
    • v.18 no.2
    • /
    • pp.95-114
    • /
    • 2008
  • Recent interest in data mining results from the expansion of business data and the growing need to extract valuable knowledge from that data and utilize it in decision making. In particular, recent advances in association rule mining enable us to acquire knowledge about sales patterns among individual items from voluminous transactional data. One of the major purposes of association rule mining is to use the acquired knowledge in marketing strategies such as cross-selling, sales promotion, and shelf-space allocation. In spite of this potential applicability, it is not often the case that a marketing mix derived from data mining leads to realized profit. The main difficulty is that association rule mining discovers tremendous numbers of patterns; data mining experts must therefore mine the results of the initial mining again to extract only actionable and profitable knowledge, which consumes much time and cost. In the literature, a number of interestingness measures have been devised for evaluating discovered patterns. Most of them can be calculated directly from a contingency table, which summarizes the sales frequencies of exclusive items or itemsets and provides brief insight into the relationship between two or more itemsets of concern. However, some useful information about sales transactions is lost when a contingency table is constructed. For instance, the size of each market basket (i.e., the number of items in each transaction) cannot be described in a contingency table, even though a larger basket naturally tends to contain more sales patterns.
Consequently, if two itemsets are sold together in a very large basket, the basket likely contains two or more patterns and the two itemsets may belong to mutually different patterns. We should therefore classify frequent co-occurrences into two categories, inter-pattern co-occurrence and intra-pattern co-occurrence, and investigate the effect of market basket size on the two. This implies that any interestingness measure for association rules should consider not only the total frequency of the target itemsets but also the size of each basket. There have been many attempts to analyze interestingness measures in the literature, most of them qualitative comparisons: such studies propose desirable properties of interestingness measures and survey how many properties each measure obeys. Relatively little attention, however, has been paid to evaluating how valuable the patterns discovered by each measure actually are in the real world. This paper makes two proposals regarding association rule measures. First, a quantitative criterion for estimating the accuracy of association rule measures is presented: a measure is considered accurate if it assigns high scores to meaningful patterns that actually exist and low scores to arbitrary patterns that co-occur by coincidence. Second, complementary measures are presented to improve the accuracy of traditional association rule measures. By adopting the factor of market basket size, the devised measures discriminate the co-occurrence of itemsets in a small basket from co-occurrence in a large basket. Intensive computer simulations under various workloads were performed to analyze the accuracy of various interestingness measures, including the traditional measures and the proposed ones.
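The contingency-table measures discussed above, and the basket-size idea, can be sketched on toy transactions. The size-weighted variant below is an illustration of the principle (small baskets are stronger evidence of a real pattern), not the paper's exact complementary measure:

```python
# Toy transactions: one small basket and one large basket both contain
# {milk, bread}, but the large basket likely mixes several unrelated patterns.
transactions = [
    {"milk", "bread"},                                                # small
    {"milk", "bread", "eggs", "beer", "ham", "jam", "tea", "rice"},   # large
    {"milk"},
    {"bread", "eggs"},
    {"milk", "bread", "butter"},
]
n = len(transactions)
both = sum(1 for t in transactions if {"milk", "bread"} <= t)
a = sum(1 for t in transactions if "milk" in t)
b = sum(1 for t in transactions if "bread" in t)

# Classical contingency-table measures for the rule milk -> bread.
support = both / n
confidence = both / a
lift = confidence / (b / n)
print(f"support={support:.2f} confidence={confidence:.2f} lift={lift:.2f}")

# Size-aware variant: weight each co-occurrence by 1/|basket|, so large
# baskets contribute less evidence than small ones.
weighted = sum(1 / len(t) for t in transactions if {"milk", "bread"} <= t)
print(f"size-weighted co-occurrence: {weighted:.2f}")
```

The plain measures treat both co-occurrences equally, while the weighted count discounts the large basket, which is the distinction between intra-pattern and inter-pattern co-occurrence that the paper's complementary measures formalize.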