• Title/Summary/Keyword: the Information Poor


Article Data Prefetching Policy using User Access Patterns in News-On-demand System (주문형 전자신문 시스템에서 사용자 접근패턴을 이용한 기사 프리패칭 기법)

  • Kim, Yeong-Ju;Choe, Tae-Uk
    • The Transactions of the Korea Information Processing Society, v.6 no.5, pp.1189-1202, 1999
  • As compared with VOD data, NOD article data has the following characteristics: it is created at any time, has a short life cycle, is selected by a user as several articles rather than a single one, and has high temporal locality of access. Because of these intrinsic features, user access patterns for NOD article data differ from those of VOD. Thus, building an NOD system using existing VOD techniques leads to poor performance. In this paper, we analyze the log file of a currently running electronic newspaper, show that the popularity distribution of NOD articles differs from the Zipf distribution of VOD data, and propose a new popularity model for NOD article data, the MS-Zipf (Multi-Selection Zipf) distribution, together with its approximate solution. We also present a life-cycle model of NOD article data, which captures changes of popularity over time. Using this life-cycle model, we develop the LLBF (Largest Life-cycle Based Frequency) prefetching algorithm and analyze its performance by simulation. The LLBF algorithm achieves a hit ratio similar to that of other prefetching algorithms such as LRU (Least Recently Used), while decreasing the number of data replacements during article prefetching and reducing the prefetching overhead on system performance. Using accurate user access patterns of NOD article data, we can correctly analyze the performance of an NOD server system and develop efficient policies for its implementation.
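The popularity and life-cycle ideas above can be sketched in a few lines. The exponential decay used for the article life cycle, and every parameter value and name below, are illustrative assumptions rather than the paper's actual MS-Zipf model or LLBF definition; the sketch only shows how life-cycle weighting can rank a fresh, moderately popular article above an old, formerly popular one.

```python
def zipf_popularity(n, theta=1.0):
    """Return normalized Zipf probabilities for ranks 1..n."""
    weights = [1.0 / (rank ** theta) for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def llbf_rank(articles, now, decay=0.5):
    """Rank articles by access count weighted by an assumed exponential
    life-cycle decay: fresher articles keep more of their popularity.

    articles: list of (article_id, access_count, publish_time)
    """
    def score(article):
        _, accesses, published = article
        age = now - published
        return accesses * (decay ** age)  # assumed decay function
    return sorted(articles, key=score, reverse=True)

probs = zipf_popularity(5)
# a1: very popular but old; a2, a3: fewer accesses but much fresher
ranked = llbf_rank([("a1", 100, 0), ("a2", 40, 9), ("a3", 60, 8)], now=10)
```

Under this weighting the old article a1 drops to the bottom of the prefetch ranking despite its larger total access count, which is the behavior the short life cycle of news articles calls for.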


A reordering scheme for the vectorizable preconditioner for the large sparse linear systems on the CRAY-2 (CRAY-2에서의 대형희귀행렬 연립방정식의 해법을 위한 벡터준비행렬의 재배열 방법)

  • Ma, Sang-Baek
    • The Transactions of the Korea Information Processing Society, v.2 no.6, pp.960-968, 1995
  • In this paper we present a reordering scheme that leads to efficient vectorization of preconditioners for the large sparse linear systems arising from partial differential equations on the CRAY-2. The scheme is a line version of the conventional red/black ordering. Coupled with a variant of ILU (Incomplete LU) preconditioning, it can overcome the poor rate of convergence of conventional red/black reordering if a relatively large number of fill-ins is used. We substantiate this claim with various experiments on the CRAY-2 machine. The computed Frobenius norms of the error matrices also agree with our claim.
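A minimal sketch of line-wise red/black ordering on an n×n grid, assuming row-major point numbering (the function name and layout are ours, not the paper's): alternating whole grid lines are colored red or black, so all points of one color can be updated independently, which is what makes the preconditioner amenable to vectorization.

```python
def line_red_black_order(n):
    """Order the points of an n x n grid (row-major numbering:
    index = row * n + col) so that all 'red' rows come first,
    followed by all 'black' rows."""
    red = [r * n + c for r in range(0, n, 2) for c in range(n)]    # even rows
    black = [r * n + c for r in range(1, n, 2) for c in range(n)]  # odd rows
    return red + black

order = line_red_black_order(4)
```

Each color class contains whole rows rather than isolated points, the "line" refinement that distinguishes this scheme from point red/black ordering.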


Efficient Channel Management to Maximize Spectrum Holes in Cognitive Radio Networks (CR 네트워크에서의 유휴자원 증대를 위한 효율적인 채널 관리 방법)

  • Jeong, Pil-Jung;Shin, Yo-An;Lee, Won-Cheol;Yoo, Myung-Sik
    • The Journal of Korean Institute of Communications and Information Sciences, v.32 no.10B, pp.621-629, 2007
  • In cognitive radio (CR) networks, channels are generally classified as either unavailable (occupied by incumbent users) or available (not occupied). This conventional channel classification may result in poor utilization of spectrum holes, since it does not take the spatial relationship between CR nodes and incumbent users into consideration. In this paper, we propose an efficient channel management scheme for centralized CR networks that maximizes spectrum holes by overcoming the shortcomings of the conventional scheme, and we mathematically analyze its effectiveness. Based on the proposed channel management scheme, we also propose a rendezvous algorithm that can establish control channels between the base station and CR nodes under a dynamically changing spectrum environment.
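The spatial idea can be sketched as follows. The protection-range threshold, the coordinates, and all names are illustrative assumptions, not the paper's actual scheme or analysis; the point is only that availability becomes a per-node property once distance to incumbents is considered.

```python
import math

def available_channels(node_pos, incumbents, all_channels, protect_range=10.0):
    """incumbents: list of (channel, (x, y)) pairs occupied by incumbent
    users. A channel is blocked for this CR node only when some incumbent
    using it lies within protect_range (an assumed interference radius);
    a far-away incumbent does not make the channel unavailable here."""
    blocked = set()
    for channel, pos in incumbents:
        if math.dist(node_pos, pos) <= protect_range:
            blocked.add(channel)
    return [ch for ch in all_channels if ch not in blocked]
```

Two CR nodes at different positions can thus see different sets of spectrum holes on the same incumbent occupancy, which is exactly what a purely global occupied/unoccupied classification throws away.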

Detection of Microcalcification Using the Wavelet Based Adaptive Sigmoid Function and Neural Network

  • Kumar, Sanjeev;Chandra, Mahesh
    • Journal of Information Processing Systems, v.13 no.4, pp.703-715, 2017
  • Mammogram images are sensitive in nature, and even a minor change in the environment affects their quality. Due to the lack of expert radiologists, it is difficult to interpret mammogram images. In this paper, an algorithm is proposed for a computer-aided diagnosis system based on a wavelet-based adaptive sigmoid function. The cascade feed-forward back-propagation technique is used for training and testing. Because of the poor contrast of digital mammogram images, it is difficult to process them directly; the images were therefore first processed using the wavelet-based adaptive sigmoid function, and the suspicious regions were then selected for feature extraction. A combination of texture features and gray-level co-occurrence matrix features was extracted and used for training and testing. The system was trained with 150 images, while a total of 100 mammogram images were used for testing. A classification accuracy of more than 95% was obtained with the proposed method.
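The paper applies an adaptive sigmoid in the wavelet domain; the sketch below only illustrates the sigmoid contrast mapping itself on raw intensities in [0, 1], with assumed gain and cutoff parameters, not the authors' adaptive, wavelet-based version.

```python
import math

def sigmoid_enhance(pixels, gain=10.0, cutoff=0.5):
    """Map each intensity in [0, 1] through a sigmoid centered at
    `cutoff`: values below the cutoff are pushed darker, values above
    it brighter, which stretches contrast around the midpoint."""
    return [1.0 / (1.0 + math.exp(-gain * (p - cutoff))) for p in pixels]

out = sigmoid_enhance([0.2, 0.5, 0.8])
```

Raising `gain` steepens the curve and therefore the contrast stretch; making `gain` and `cutoff` depend on local wavelet coefficients is what would make the mapping adaptive.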

DABC: A dynamic ARX-based lightweight block cipher with high diffusion

  • Wen, Chen;Lang, Li;Ying, Guo
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.1, pp.165-184, 2023
  • ARX-based lightweight block ciphers are widely used in resource-constrained IoT devices due to their fast and simple operation on software and hardware platforms. However, they have three weaknesses. Firstly, only half of the data can be changed in one round. Secondly, traditional ARX-based lightweight block ciphers are static structures, which provide limited security. Thirdly, they have poor diffusion when the initial plaintext and key are all 0s or all 1s. This paper proposes a new dynamic ARX-based lightweight block cipher, called DABC, to overcome these weaknesses. DABC can change all data in one round, which overcomes the first weakness. We combine the key and the generalized two-dimensional cat map to construct a dynamic permutation layer P1, which improves the uncertainty between different rounds of DABC. The non-linear component of the round function alternately uses NAND and AND gates to increase the complexity of attacks, which overcomes the third weakness. Meanwhile, this paper proposes a round-based architecture for DABC and carried out ASIC and FPGA implementations. The hardware results show that DABC requires few hardware resources and achieves high throughput. Finally, the security evaluation results show that DABC has a good avalanche effect and is secure.
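For readers unfamiliar with the cipher family, here is a generic Add-Rotate-Xor round on 32-bit words. This is not DABC: the rotation amounts, mixing order, and the decision to update both halves are our illustrative choices; the sketch only shows the operation class (modular addition, rotation, XOR) these ciphers are built from.

```python
MASK = 0xFFFFFFFF  # work on 32-bit words

def rotl(x, r):
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & MASK

def arx_round(left, right, round_key):
    """One illustrative ARX round that updates BOTH halves of the state
    (a classic Feistel-style ARX round would change only one half,
    which is the first weakness the abstract mentions)."""
    left = (left + right) & MASK   # Add: modular addition mixes the halves
    left = rotl(left, 7)           # Rotate: amount 7 is assumed
    left ^= round_key              # Xor with the round key
    right = rotl(right, 3) ^ left  # mix the second half as well
    return left, right
```

Because additions, rotations, and XORs map directly to single CPU or gate-level operations, such rounds are cheap on both software and hardware platforms, which is why the family suits constrained IoT devices.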

Stock Price Prediction and Portfolio Selection Using Artificial Intelligence

  • Sandeep Patalay;Madhusudhan Rao Bandlamudi
    • Asia Pacific Journal of Information Systems, v.30 no.1, pp.31-52, 2020
  • Stock markets are popular investment avenues for people seeking premium returns compared to other financial instruments, but they are highly volatile and risky due to complex financial dynamics and poor understanding of the market forces involved in price determination. A system that can forecast stock prices and automatically create a portfolio of top-performing stocks is of great value to individual investors who lack the knowledge to understand the complex dynamics involved in evaluating and predicting stock prices. In this paper, the authors propose a stock prediction, portfolio generation, and selection model based on machine-learning algorithms: artificial neural networks (ANNs) are used for stock price prediction; mathematical and statistical techniques are used for portfolio generation; and unsupervised machine learning based on the K-Means clustering algorithm is used for portfolio evaluation and selection, taking portfolio return and risk into account. The model presented here is limited to predicting stock prices on a long-term basis, as its inputs are based on fundamental attributes and the intrinsic value of the stock. The results of this study are quite encouraging: the stock prediction models are able to predict stock prices at least a financial quarter in advance with an accuracy of around 90 percent, and the portfolio selection classifiers give returns in excess of average market returns.
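The portfolio-evaluation step can be sketched as clustering candidate portfolios in the (expected return, risk) plane. The tiny k-means below, its deterministic initialization, and the made-up data are our illustration of the idea; the paper's exact features and selection rule are not given in the abstract.

```python
def kmeans(points, k, iters=20):
    """Tiny 2-D k-means with deterministic init (first k points)."""
    centers = points[:k]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            idx = min(range(k),
                      key=lambda i: (p[0] - centers[i][0]) ** 2
                                    + (p[1] - centers[i][1]) ** 2)
            groups[idx].append(p)
        # recompute each center as the mean of its group
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# (expected return, risk) pairs for candidate portfolios -- made-up data
portfolios = [(0.12, 0.05), (0.30, 0.40), (0.11, 0.04), (0.28, 0.35)]
centers, groups = kmeans(portfolios, k=2)
```

The clustering separates a low-return/low-risk group from a high-return/high-risk one; a selection rule could then pick the cluster whose center best matches the investor's return-to-risk preference.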

Quality Enhancement for Hybrid 3DTV with Mixed Resolution Using Conditional Replenishment Algorithm

  • Jung, Kyeong-Hoon;Bang, Min-Suk;Kim, Sung-Hoon;Choo, Hyon-Gon;Kang, Dong-Wook
    • ETRI Journal, v.36 no.5, pp.752-760, 2014
  • This paper proposes a conditional replenishment algorithm (CRA) to improve the visual quality of a hybrid stereoscopic 3DTV based on the ATSC-M/H standard, in which the spatial resolutions of the left and right views are mismatched. To generate an enhanced view, the CRA chooses the better substitute between a high-quality disparity-compensated view and a simply interpolated view. The CRA generates a disparity map that includes modes and disparity vectors as additional information. It also employs a quad-tree structure with variable block size, exploiting the spatial correlation of disparity vectors. In addition, it reuses the disparity map of the previous frame to keep the amount of additional information as small as possible. Simulation results show that the proposed CRA successfully improves the peak signal-to-noise ratio of the poor-quality view and consequently has a positive effect on the subjective quality of the resulting 3D view.
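The per-block decision at the heart of conditional replenishment can be sketched as follows. The MSE criterion and fixed-size blocks are assumed stand-ins for the paper's actual cost and quad-tree blocks; in a real system the decision is made at the encoder side, where the full-resolution reference exists, and signaled in the disparity map.

```python
def mse(a, b):
    """Mean squared error between two equal-length pixel blocks."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def choose_blocks(reference, compensated, interpolated):
    """For each block, keep the disparity-compensated candidate ('DC')
    only when it matches the reference better than plain interpolation
    ('INTERP'); returns one decision per block."""
    return ["DC" if mse(r, d) <= mse(r, i) else "INTERP"
            for r, d, i in zip(reference, compensated, interpolated)]

decisions = choose_blocks([[10, 10], [20, 20]],   # reference blocks
                          [[10, 11], [0, 0]],     # disparity-compensated
                          [[5, 5], [20, 19]])     # simply interpolated
```

Falling back to interpolation where disparity compensation fails (occlusions, bad vectors) is what keeps the enhanced view from being worse than the plain upsampled one.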

FaST: Fine-grained and Scalable TCP for Cloud Data Center Networks

  • Hwang, Jaehyun;Yoo, Joon
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.3, pp.762-777, 2014
  • With the increasing usage of cloud applications such as MapReduce and social networking, the amount of data traffic in data center networks continues to grow. Moreover, these applications follow the incast traffic pattern, in which a large burst of traffic sent by a number of senders accumulates simultaneously at the shallow-buffered data center switches, causing severe packet losses. The currently deployed TCP is custom-tailored for the wide-area Internet, so cloud applications suffer long completion times owing to the packet losses, resulting in poor quality of service. An Explicit Congestion Notification (ECN)-based approach is an attractive solution that conservatively adjusts to network congestion in advance; this legacy approach, however, lacks scalability in terms of the number of flows. In this paper, we reveal the primary cause of the scalability issue through analysis and propose a new congestion-control algorithm called FaST. FaST employs a novel virtual congestion window to conduct fine-grained congestion control, resulting in improved scalability. Furthermore, FaST is easy to deploy, since it requires only a few software modifications on the server side. Through ns-3 simulations, we show that FaST improves the scalability of data center networks compared with existing approaches.
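One way to read "virtual congestion window" is a window value allowed to shrink below one segment, expressed as sending one packet every few round trips. The update rules, constants, and pacing below are our illustrative reading, not FaST's actual algorithm.

```python
def update_vwnd(vwnd, ecn_marked, min_vwnd=0.25):
    """Assumed AIMD-style update on a virtual window: on an ECN mark,
    halve the window, allowing it to drop below one segment; otherwise
    grow additively per RTT. Constants are illustrative."""
    if ecn_marked:
        return max(vwnd / 2.0, min_vwnd)
    return vwnd + 1.0

def packets_this_rtt(vwnd, rtt_index):
    """With vwnd < 1 segment, send one packet every round(1/vwnd) RTTs;
    with vwnd >= 1, send int(vwnd) packets per RTT as usual."""
    if vwnd >= 1.0:
        return int(vwnd)
    interval = round(1.0 / vwnd)
    return 1 if rtt_index % interval == 0 else 0
```

A sub-segment window is what lets very many flows share a shallow buffer without each one insisting on at least one packet in flight every RTT, which is where a conventional window-based scheme stops scaling.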

EPfuzzer: Improving Hybrid Fuzzing with Hardest-to-reach Branch Prioritization

  • Wang, Yunchao;Wu, Zehui;Wei, Qiang;Wang, Qingxian
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.9, pp.3885-3906, 2020
  • Hybrid fuzzing, which combines fuzzing and concolic execution, has proved its ability to achieve higher code coverage and therefore find more bugs. However, current hybrid fuzzers usually suffer from inefficiency and poor scalability when applied to complex, real-world program testing. We observed that the performance bottleneck is the inefficient cooperation between the fuzzer and the concolic executor, together with slow symbolic emulation. In this paper, we propose a novel solution named EPfuzzer to improve hybrid fuzzing. EPfuzzer implements two key ideas: 1) only the hardest-to-reach branch is prioritized for concolic execution, to avoid generating uninteresting inputs; and 2) only the input bytes relevant to the target branch to be flipped are symbolized, to reduce the overhead of symbolic emulation. With these optimizations, EPfuzzer can be efficiently targeted at the hardest-to-reach branch. We evaluated EPfuzzer on three sets of programs: five real-world applications and two popular benchmarks (LAVA-M and the Google Fuzzer Test Suite). The evaluation results showed that EPfuzzer was much more efficient and scalable than the state-of-the-art concolic execution engine (QSYM), finding more bugs and achieving better code coverage. In addition, we discovered seven previously unknown security bugs in five real-world programs and reported them to the vendors.
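The first idea, prioritizing the hardest-to-reach branch, can be sketched with a hit-count heuristic: among branches the fuzzer has reached but not yet flipped, hand the least-often-hit one to the concolic executor. The data structures and the hit-count criterion are our illustrative reading of the abstract, not EPfuzzer's actual metric.

```python
def pick_hardest_branch(branch_hits, solved):
    """branch_hits: {branch_id: number of times the fuzzer reached it}.
    solved: set of branches already flipped. Returns the unsolved branch
    with the fewest hits (a proxy for 'hardest to reach'), or None when
    every known branch has been flipped."""
    candidates = {b: h for b, h in branch_hits.items() if b not in solved}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)
```

Spending the expensive concolic executor only on such branches avoids re-deriving inputs the cheap fuzzer would have found anyway, which is the cooperation inefficiency the paper targets.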

A Study on the Measurement and Improvement of Service Quality using QFD in the Internet Shoppingmall (QFD를 이용한 인터넷 쇼핑몰의 서비스 품질 측정 및 개선에 관한 연구)

  • Jung Sang-Chul;Yoo Hae-Rim;Kim Myeong-Suk
    • Journal of Information Technology Applications and Management, v.11 no.4, pp.181-208, 2004
  • With the development of the Internet, the business environment has changed rapidly. In particular, managers of Internet shopping malls have been trying to provide excellent e-Services to their customers. e-Service is defined as comprising all interactive services delivered over the Internet using advanced telecommunications, information, and multimedia technologies. According to a study by e-Satisfy.com [2000], however, customer service through the Internet is still neither effective nor efficient; poor service impacts a company's profit, while excellent service can improve the value and quality of its services and products. To support customer-oriented e-Services, this study combines QFD with an e-Service quality model for the Internet shopping mall service system, which can help determine the design characteristics relevant to customers' e-Service quality requirements. This hybrid model has two stages. In the first stage, we measure service quality and find the priorities of service quality attributes by purchase process; in the second stage, based on the priorities of the e-Service quality attributes, we find the design characteristics that maximize customer satisfaction. This study provides Internet shopping mall managers with implications for improving service quality, measuring the quality of e-Services, and deriving design characteristics for customer-oriented service quality.
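The second QFD stage can be sketched as mapping weighted customer requirements through a relationship matrix to rank design characteristics. The requirement names, weights, and relationship strengths below are made-up illustrations (QFD conventionally uses strengths such as 1/3/9), not the study's measured data.

```python
def qfd_priorities(weights, relationships):
    """weights: {requirement: importance weight}.
    relationships: {requirement: {design_characteristic: strength}}.
    Returns design characteristics ranked by total weighted strength."""
    scores = {}
    for req, weight in weights.items():
        for dc, strength in relationships.get(req, {}).items():
            scores[dc] = scores.get(dc, 0) + weight * strength
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# hypothetical e-Service quality requirements and design characteristics
ranking = qfd_priorities(
    {"fast delivery": 0.5, "easy navigation": 0.3, "secure payment": 0.2},
    {"fast delivery": {"logistics tracking": 9, "site search": 1},
     "easy navigation": {"site search": 9},
     "secure payment": {"payment gateway": 9}},
)
```

The top-ranked design characteristic is the one most strongly tied to the highest-priority requirements, which is how the second stage directs improvement effort toward maximum customer satisfaction.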
