• Title/Summary/Keyword: Data hit rate

An LMI Approach to Robust Congestion Control of ATM Networks

  • Lin Jun;Xie Lihua;Zhang Huanshui
    • International Journal of Control, Automation, and Systems
    • /
    • v.4 no.1
    • /
    • pp.53-62
    • /
    • 2006
  • In this paper, ATM network congestion control with explicit rate feedback is considered. In ATM networks, delays commonly appear in data transmission and have to be considered in congestion control design. In this paper, a bounded single round delay on the return path is considered. Our objective is to design an explicit rate feedback control that achieves a robust optimal $H_2$ performance regardless of the bounded time-varying delays. An optimization approach in terms of linear matrix inequalities (LMIs) is given. Saturation in source rate and queue buffer is also taken into consideration in the proposed design. Simulations for the cases of single source and multiple sources are presented to demonstrate the effectiveness of the design.
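
The LMI machinery the abstract refers to can be illustrated with a toy feasibility problem in CVXPY. The sketch below is not the authors' $H_2$ design; it merely checks a basic Lyapunov stability LMI for a hypothetical system matrix A:

```python
import numpy as np
import cvxpy as cp

# Hypothetical stable plant matrix (a stand-in, not the paper's delay model).
A = np.array([[-2.0, 1.0],
              [0.0, -1.5]])
n = A.shape[0]

# Decision variable: symmetric Lyapunov matrix P.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6

# LMI feasibility: P > 0 and A'P + PA < 0 certify stability.
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # 'optimal' means the LMI is feasible
print(P.value)
```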

The Prediction of Currency Crises through Artificial Neural Network (인공신경망을 이용한 경제 위기 예측)

  • Lee, Hyoung Yong;Park, Jung Min
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.19-43
    • /
    • 2016
  • This study examines the causes of the Asian exchange rate crisis and compares it to the European Monetary System crisis. In 1997, emerging countries in Asia experienced financial crises; currencies in the European Monetary System had undergone the same experience in 1992, followed by Mexico in 1994. The objective of this paper lies in generating useful insights from these crises. This research compares South Korea, the United Kingdom, and Mexico, and then compares three different prediction models. Previous studies of economic crises focused largely on the manual construction of causal models using linear techniques; the weakness of such models stems from the prevalence of nonlinear factors in reality. This paper uses a structural equation model to analyze the causes, followed by a neural network model to circumvent the linear model's weaknesses. The models are examined in the context of predicting exchange rates. The data were quarterly, with the Consumer Price Index, Gross Domestic Product, interest rate, stock index, current account, and foreign reserves as the independent variables for prediction; however, the time periods of each country's data differ. Lisrel is an emerging method and as such requires a fresh approach to financial crisis prediction model design, along with the flexibility to accommodate unexpected change. This paper indicates that the neural network model has the greater prediction performance in Mexico and the United Kingdom, whereas in Korea multiple regression shows the better performance; in Mexico, multiple regression performs almost indistinguishably from the Lisrel model. Although the Lisrel model does not show significant performance, a refined model is expected to yield better results. In future work, the structural model should incorporate psychological factors and other unobservable areas. The reason for the low hit ratio is that the model in this paper uses only financial market data, so other important factors cannot be considered. Korea's hit ratio is lower than that of the United Kingdom, suggesting that some other construct affects its financial market; the same holds for Mexico. The United Kingdom's financial market is more strongly influenced and better explained by financial factors than those of Korea and Mexico.
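
As a rough illustration of the kind of comparison described above (using synthetic data, not the paper's quarterly series), the sketch below fits a linear model and a small neural network to stand-in macro indicators and scores each by a directional hit ratio:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's quarterly indicators (CPI, GDP,
# interest rate, stock index, current account, foreign reserves).
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + 0.3 * np.sin(X[:, 0]) + rng.normal(scale=0.1, size=120)

X_tr, X_te, y_tr, y_te = X[:90], X[90:], y[:90], y[90:]

for name, model in [("linear regression", LinearRegression()),
                    ("neural network", MLPRegressor(hidden_layer_sizes=(16,),
                                                    max_iter=5000,
                                                    random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # "Hit ratio" here: share of cases where the predicted sign is correct.
    hits = np.mean(np.sign(pred) == np.sign(y_te))
    print(f"{name}: hit ratio = {hits:.2%}")
```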

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market has a short history, however, and bad debt began to increase again after the 2009 global financial crisis due to the recession in the real economy. NPLs have become a major investment in recent years, as investment capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce, since the history of capital market investment in the domestic NPL market is short. In addition, more scientific and systematic analysis is required for decision-making, given declining profitability and price fluctuations driven by swings in the real estate business. In this study, we propose a prediction model that can determine whether a benchmark yield is achieved, using NPL market data in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising 2,291 items in total. From the 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected as independent variables. For variable selection, one-to-one t-tests, stepwise logistic regression, and decision trees were used, yielding seven independent variables: purchase year, SPC (special purpose company), municipality, appraisal value, purchase cost, OPB (outstanding principal balance), and HP (holding period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This choice was made because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's effectiveness; moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is enough to make a decision. For the dependent variable, we constructed and compared predictive models with the threshold adjusted around 12%, the standard rate of return used in the industry, to ascertain whether it is a meaningful reference value. As a result, the predictive model built with the dependent variable defined by the 12% standard rate of return had the best average hit ratio, 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we constructed and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. To do this, 10 sets of training and testing data were extracted using 10-fold cross-validation. After building the models on these data, the hit ratio of each set was averaged and the performance compared. The average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that it is effective to use the 7 independent variables with an artificial neural network prediction model in the NPL market. The proposed model predicts in advance whether a new item will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
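
A minimal sketch of the evaluation protocol described above, using synthetic stand-in data and scikit-learn, is given below; the genetic algorithm linear model is omitted, and accuracy serves as the hit ratio:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic stand-ins for the 7 independent variables and the binary
# "12% benchmark reached" label (the actual NPL data are not public).
X = rng.normal(size=(2291, 7))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=2291) > 0).astype(int)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0),
}

# 10-fold cross-validation; the mean accuracy plays the role of the
# averaged hit ratio reported in the abstract.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: mean hit ratio = {scores.mean():.2%}")
```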

Evaluation and Quality Control of Data in the Digital Library System (디지털자료실지원센터 종합목록 데이터 품질평가 및 관리 방안)

  • Choe In-Sook
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.38 no.3
    • /
    • pp.119-139
    • /
    • 2004
  • This study intends to evaluate the quality of the Digital Library System DB and to suggest methods for its quality control. The evaluation criteria are hit rates, redundancy, completeness, and accuracy. In spite of high hit rates, excessive duplicate records representing a single work resulted in serious redundancy. The average completeness rate of records was 48.12% due to the low level of description. The analysis of accuracy showed various errors in most records, corresponding to 92% of the total. A detailed analysis of these errors identified their causes and led to practical guidelines for school library catalogers.
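
The completeness and redundancy measures can be illustrated with a short sketch; the required-field list and sample records below are hypothetical, not the actual union catalog schema:

```python
from collections import Counter

# Hypothetical required fields for a minimally complete record.
REQUIRED_FIELDS = ["title", "author", "publisher", "year", "isbn"]

records = [
    {"title": "A", "author": "Kim", "publisher": "P", "year": "2004", "isbn": None},
    {"title": "A", "author": "Kim", "publisher": None, "year": None, "isbn": None},
    {"title": "B", "author": "Lee", "publisher": "Q", "year": "2003", "isbn": "X"},
]

def completeness(rec):
    # Share of required fields actually filled in this record.
    return sum(1 for f in REQUIRED_FIELDS if rec.get(f)) / len(REQUIRED_FIELDS)

avg = sum(map(completeness, records)) / len(records)
print(f"average completeness: {avg:.2%}")

# Redundancy: more than one record standing for the same work.
works = Counter((r["title"], r["author"]) for r in records)
dupes = sum(count - 1 for count in works.values())
print(f"redundant records: {dupes}")
```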

The Multiple Index Approach for the Evaluation of Tourism and Recreation Related Pictograms (MIA를 이용한 관광.휴양관련 픽토그램의 인지효과 평가)

  • Kim Jeong-Min;Yoo Ki-Joon
    • Korean Journal of Environment and Ecology
    • /
    • v.20 no.3
    • /
    • pp.319-330
    • /
    • 2006
  • It is imperative that pictograms, as pictorial information, be empirically tested in order to establish whether users do indeed associate the appropriate referent in an actual usage situation. An experiment employing the Multiple Index Approach was conducted in a classroom with 64 subjects to evaluate tourism and recreation related pictograms. Performance data (hit rate, false alarm, and missing value) for 25 pictograms were collected, and the average hit rate, as the prime index of pictogram associativeness, was 65.82%. The matrix analysis showed that 14 pictograms were high in both subjective certainty and subjective suitability. The other 11, which were low on both criteria, may require prior learning or improvement of the pictogram designs to represent their meanings more distinctively.
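
One plausible reading of the three performance indices, assuming wrong answers count as false alarms and blanks as missing values, is sketched below with hypothetical responses for a single pictogram:

```python
# Hypothetical responses from 10 subjects; None means no answer given.
responses = ["campsite", "campsite", "picnic area", None, "campsite",
             "viewpoint", "campsite", None, "campsite", "campsite"]
intended = "campsite"   # the referent the pictogram is meant to convey

n = len(responses)
hits = sum(r == intended for r in responses)
missing = sum(r is None for r in responses)
false_alarms = n - hits - missing   # answered, but with the wrong referent

print(f"hit rate    : {hits / n:.1%}")
print(f"false alarm : {false_alarms / n:.1%}")
print(f"missing     : {missing / n:.1%}")
```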

Fault Tolerant Cache for Soft Error (소프트에러 결함 허용 캐쉬)

  • Lee, Jong-Ho;Cho, Jun-Dong;Pyo, Jung-Yul;Park, Gi-Ho
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.1
    • /
    • pp.128-136
    • /
    • 2008
  • In this paper, we propose a new cache structure for effective correction of soft errors. We add a check bit and an SEEB (soft error evaluation block) to evaluate the status of each cache line. The SEEB stores the result of each parity check in a two-bit shift register and sets the check bit to '1' when the parity check fails twice on the same cache line. In this case the line is treated as vulnerable to soft errors. A new replacement algorithm is proposed so that, when data is filled into the cache, only the blocks determined valid by the SEEB are used. This structure prevents vulnerable lines from being used while still allowing lines whose parity check has failed only once to be reused, contributing to efficient use of the cache. We tried to minimize the side effects of the proposed cache; experimental results using the SPEC2000 benchmarks showed a 3% degradation in hit rate, a 15% timing overhead due to the parity logic, and a 2.7% area overhead. These costs can be considered trivial for the SEEB, because almost all fault-tolerant designs inevitably adopt this parity method despite some overhead. Moreover, if only parity logic is used, it has a 5%~10% advantage over ECC logic. By using the proposed cache, the system is protected from the threat of soft errors in the cache, and the hit rate can be maintained at nearly the level of a cache without soft errors.
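
A behavioral sketch of the SEEB bookkeeping (not the authors' hardware design, and assuming the two parity failures must be consecutive) might look as follows:

```python
# Per-line status tracking: a two-bit shift register of parity outcomes,
# plus a check bit that marks the line vulnerable after two failures.
class CacheLineStatus:
    def __init__(self):
        self.shift = [0, 0]   # last two parity-check results (1 = fail)
        self.check_bit = 0    # 1 = line considered vulnerable to soft error

    def record_parity(self, failed: bool):
        # Shift in the newest parity result.
        self.shift = [self.shift[1], int(failed)]
        if self.shift == [1, 1]:      # two failures on the same line
            self.check_bit = 1

    def usable(self) -> bool:
        # The replacement policy only fills blocks whose check bit is 0.
        return self.check_bit == 0

line = CacheLineStatus()
line.record_parity(True)    # first failure: the line can still be reused
print(line.usable())        # True
line.record_parity(True)    # second failure: marked vulnerable
print(line.usable())        # False
```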

Comparative Study of Dimension Reduction Methods for Highly Imbalanced Overlapping Churn Data

  • Lee, Sujee;Koo, Bonhyo;Jung, Kyu-Hwan
    • Industrial Engineering and Management Systems
    • /
    • v.13 no.4
    • /
    • pp.454-462
    • /
    • 2014
  • Retention of customers likely to churn is one of the most important issues in customer relationship management, so companies try to predict churning customers using their large-scale, high-dimensional data. This study focuses on dealing with large data sets by reducing their dimensionality. Using six different dimension reduction methods, namely principal component analysis (PCA), factor analysis (FA), locally linear embedding (LLE), local tangent space alignment (LTSA), locality preserving projections (LPP), and a deep auto-encoder, our experiments apply each method to the training data, build a classification model on the mapped data, and then measure performance using the hit rate to compare the methods. In the results, PCA shows good performance despite its simplicity, and the deep auto-encoder gives the best overall performance. These results can be explained by the characteristics of the churn prediction data, which are highly correlated and overlap across the classes. We also propose a simple out-of-sample extension method for the nonlinear dimension reduction methods LLE and LTSA, utilizing the characteristics of the data.
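
A minimal sketch of this evaluation loop on synthetic imbalanced, overlapping data is shown below; it uses scikit-learn's built-in out-of-sample mapping for LLE rather than the extension proposed in the paper, and covers only two of the six methods:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic highly imbalanced, overlapping two-class data as a stand-in
# for the churn data set used in the paper.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           weights=[0.95, 0.05], class_sep=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

reducers = {
    "PCA": PCA(n_components=10),
    "LLE": LocallyLinearEmbedding(n_components=10, n_neighbors=12),
}

for name, reducer in reducers.items():
    Z_tr = reducer.fit_transform(X_tr)
    Z_te = reducer.transform(X_te)   # out-of-sample mapping of test points
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    print(f"{name}: hit rate = {clf.score(Z_te, y_te):.2%}")
```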

Cache Memory and Replacement Algorithm Implementation and Performance Comparison

  • Park, Na Eun;Kim, Jongwan;Jeong, Tae Seog
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.3
    • /
    • pp.11-17
    • /
    • 2020
  • In this paper, we present practical results for cache replacement policies by measuring the cache hit rate and search time of each replacement algorithm through cache simulation. The structure of the cache memory and four replacement policies, FIFO, LFU, LRU, and Random, were implemented in software to analyze the characteristics of each technique. The experiments showed that the LRU algorithm achieved a hit rate and search time of 36.044% and 577.936 ns under a uniform distribution and 45.636% and 504.692 ns under a skewed distribution, while the FIFO algorithm showed similar performance to LRU at 36.078% and 554.772 ns under the uniform distribution and 45.662% and 489.574 ns under the skewed distribution. LFU followed, and the Random algorithm showed the lowest performance, measured at 30.042% and 622.866 ns under the uniform distribution and 36.36% and 553.878 ns under the skewed distribution. The LRU replacement method commonly used in cache memory is more complex to implement, but it is the most efficient of the conventional replacement algorithms, indicating that it is a reasonable replacement method given that it exploits the reference history of the data.
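
A simplified simulation of this kind of experiment (hypothetical reference stream, no timing measurement, and LFU omitted for brevity) might look like this:

```python
import random
from collections import deque

def simulate(policy, accesses, capacity=64):
    cache, order, hits = set(), deque(), 0   # order: insertion/recency list
    for key in accesses:
        if key in cache:
            hits += 1
            if policy == "LRU":               # refresh recency on a hit
                order.remove(key)
                order.append(key)
            continue
        if len(cache) >= capacity:            # miss with a full cache: evict
            if policy == "Random":
                victim = random.choice(tuple(cache))
                order.remove(victim)
            else:                             # FIFO/LRU evict the front entry
                victim = order.popleft()
            cache.remove(victim)
        cache.add(key)
        order.append(key)
    return hits / len(accesses)

random.seed(0)
# Skewed reference stream: a few keys are re-referenced far more often.
accesses = [int(random.paretovariate(1.2)) % 1000 for _ in range(100_000)]
for policy in ("FIFO", "LRU", "Random"):
    print(f"{policy}: hit rate = {simulate(policy, accesses):.2%}")
```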

Neighbor Cooperation Based In-Network Caching for Content-Centric Networking

  • Luo, Xi;An, Ying
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.5
    • /
    • pp.2398-2415
    • /
    • 2017
  • Content-Centric Networking (CCN) is a new Internet architecture with routing and caching centered on contents. Through its receiver-driven and connectionless communication model, CCN natively supports the seamless mobility of nodes and scalable content acquisition. In-network caching is one of the core technologies in CCN, and research on efficient caching schemes has become increasingly attractive. To address the problem of unbalanced cache load distribution in some existing caching strategies, this paper presents a neighbor cooperation based in-network caching scheme. In this scheme, the node with the highest betweenness centrality in the content delivery path is selected as the central caching node, and the area of its ego network is selected as the caching area. When the caching node does not have sufficient resources, part of its cached content is picked out and transferred to an appropriate neighbor, chosen by comprehensively considering factors such as available node cache, cache replacement rate, and link stability between nodes. Simulation results show that our scheme can effectively enhance the utilization of cache resources, improve the cache hit rate, and reduce the average access cost.
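
The caching-node selection step can be sketched with networkx on a toy topology; the graph and delivery path below are hypothetical:

```python
import networkx as nx

# Toy topology standing in for a CCN: the scheme picks the delivery-path
# node with the highest betweenness centrality as the central caching
# node, and caches within that node's ego network.
G = nx.barabasi_albert_graph(30, 2, seed=1)
delivery_path = nx.shortest_path(G, source=0, target=17)

bc = nx.betweenness_centrality(G)
caching_node = max(delivery_path, key=lambda v: bc[v])
caching_area = nx.ego_graph(G, caching_node)   # the node plus its neighbors

print("central caching node:", caching_node)
print("caching area:", sorted(caching_area.nodes))
```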

The Discriminant Analysis of Blood Pressure - Including the Risk Factors - (혈압 판별 분석 -위험요인을 중심으로-)

  • 오현수;서화숙
    • Journal of Korean Academy of Nursing
    • /
    • v.28 no.2
    • /
    • pp.256-269
    • /
    • 1998
  • The purpose of this study was to evaluate the usefulness of variables known to be related to blood pressure for discriminating between hypertensive and normotensive groups. The variables were obesity, serum lipids, lifestyle-related variables such as smoking, alcohol, exercise, and stress, and demographic variables such as age, economic status, and education. The data were collected from 400 male clients who visited one university hospital located in Incheon, Republic of Korea, from May 1996 to December 1996 for a regular physical examination. The variables which showed significance for discriminating systolic blood pressure were age, serum lipids, education, HDL, exercise, total cholesterol, body fat percent, alcohol, stress, and smoking (in order of significance). Using the combination of these variables, the probability of correct prediction for the high-systolic pressure group was 2%, that for the normal-systolic pressure group was 70.3%, and the total hit ratio was 70%. The variables which showed significance for discriminating diastolic blood pressure were exercise, triglyceride, alcohol, smoking, economic status, age, and BMI (in order of significance). Using the combination of these variables, the probability of correct prediction for the high-diastolic pressure group was 71.2%, that for the normal-diastolic pressure group was 71.3%, and the total hit ratio was 71.3%. Multiple regression analysis was performed to examine the association of systolic blood pressure with the lifestyle-related variables after adjustment for obesity, serum lipids, and the demographic variables. First, the effect of the demographic variables alone on systolic blood pressure was statistically significant (p=.000), with an adjusted $R^2$ of 0.09. Adding obesity to the demographic variables raised the adjusted $R^2$ to 0.11 (p=.000); the contribution of obesity to systolic blood pressure was therefore 2.0%. Next, adding serum lipids to obesity and the demographic variables raised the adjusted $R^2$ to 0.12 (p=.000); the contribution of serum lipids to systolic pressure was therefore 1.0%. Finally, adding the lifestyle-related variables to all other variables raised the adjusted $R^2$ to 0.18 (p=.000); the contribution of the lifestyle-related variables to systolic blood pressure after adjustment for obesity, serum lipids, and the demographic variables was therefore 6.0%. Multiple regression analysis was also performed to examine the association of diastolic blood pressure with the lifestyle-related variables after adjustment for obesity, serum lipids, and the demographic variables. First, the effect of the demographic variables alone on diastolic blood pressure was statistically significant (p=.01), with an adjusted $R^2$ of 0.03. Adding obesity to the demographic variables raised the adjusted $R^2$ to 0.06 (p=.000); the contribution of obesity to diastolic blood pressure was therefore 3.0%. Next, adding serum lipids to obesity and the demographic variables raised the adjusted $R^2$ to 0.09 (p=.000); the contribution of serum lipids to diastolic pressure was therefore 3.0%. Finally, adding the lifestyle-related variables to all other variables raised the adjusted $R^2$ to 0.12 (p=.000); the contribution of the lifestyle-related variables to diastolic blood pressure after adjustment for obesity, serum lipids, and the demographic variables was therefore 3.0%.
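
The hierarchical (block-wise) regression procedure, reporting the adjusted $R^2$ gain of each added block, can be sketched with synthetic data as follows; the variable blocks are hypothetical stand-ins for the study's measures:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400   # the study used 400 male clients

# Synthetic stand-ins for the study's variable blocks (hypothetical data).
demo = rng.normal(size=(n, 3))       # age, economic status, education
obesity = rng.normal(size=(n, 2))    # body fat percent, BMI
lipids = rng.normal(size=(n, 3))     # total cholesterol, HDL, triglyceride
lifestyle = rng.normal(size=(n, 4))  # smoking, alcohol, exercise, stress
y = (demo @ [0.3, 0.1, 0.2] + obesity @ [0.2, 0.1] + lipids @ [0.1, 0.1, 0.1]
     + lifestyle @ [0.2, 0.1, 0.3, 0.1] + rng.normal(size=n))

# Add one block at a time and report the adjusted R^2 and its gain,
# mirroring the stepwise contribution rates described in the abstract.
X, prev = None, 0.0
for name, block in [("demographic", demo), ("+ obesity", obesity),
                    ("+ serum lipids", lipids), ("+ lifestyle", lifestyle)]:
    X = block if X is None else np.column_stack([X, block])
    r2 = sm.OLS(y, sm.add_constant(X)).fit().rsquared_adj
    print(f"{name:15s} adjusted R^2 = {r2:.3f}  (gain {r2 - prev:+.3f})")
    prev = r2
```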
