• Title/Summary/Keyword: Distribution Information


Efficient Execution of Range Mosaic Queries (범위 모자이크 질의의 효율적인 수행)

  • Hong, Seok-Jin;Bae, Jin-Uk;Lee, Suk-Ho
    • Journal of KIISE:Databases / v.32 no.5 / pp.487-497 / 2005
  • A range mosaic query returns the distribution of data within the query region as a mosaic pattern, whereas a range aggregate query returns a single aggregate value for the data within the query region. The range mosaic query divides the query region by a multi-dimensional grid and calculates aggregate values for the grid cells. In this paper, we propose a new type of query, the range mosaic query, and a new operator, mosaic-by, with which range mosaic queries can be expressed. In addition, we present efficient algorithms for processing range mosaic queries using an aggregate R-tree. The algorithm we present computes the aggregate result of every mosaic grid cell in a single traversal of the aggregate R-tree, and executes queries with only a small number of node accesses by exploiting the aggregate values stored in the tree. Our experimental study shows that the range mosaic query algorithm performs reliably on several synthetic datasets and a real-world dataset.
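
The per-cell aggregation a range mosaic query performs can be sketched in a few lines. This is a naive scan rather than the paper's aggregate-R-tree traversal, and the function and parameter names are illustrative:

```python
from collections import defaultdict

def range_mosaic(points, region, grid):
    """COUNT-mosaic of 2-D points: aggregate the points inside `region`
    per cell of an nx-by-ny grid laid over the region."""
    (x_lo, y_lo), (x_hi, y_hi) = region
    nx, ny = grid
    cell_w = (x_hi - x_lo) / nx
    cell_h = (y_hi - y_lo) / ny
    mosaic = defaultdict(int)
    for x, y in points:
        if x_lo <= x < x_hi and y_lo <= y < y_hi:
            # Map the point to its mosaic grid cell and count it.
            i = int((x - x_lo) // cell_w)
            j = int((y - y_lo) // cell_h)
            mosaic[(i, j)] += 1
    return dict(mosaic)
```

The paper's contribution is computing the same per-cell aggregates without scanning the data, by reusing aggregate values stored in R-tree nodes.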

A Case of Establishing Robo-advisor Strategy through Parameter Optimization (금융 지표와 파라미터 최적화를 통한 로보어드바이저 전략 도출 사례)

  • Kang, Mincheal;Lim, Gyoo Gun
    • Journal of Information Technology Services / v.19 no.2 / pp.109-124 / 2020
  • Facing the 4th Industrial Revolution era, research on artificial intelligence has become active, and attempts have been made to apply machine learning in various fields. In finance, Robo Advisor services, which analyze the market, make investment decisions, and allocate assets instead of people, are rapidly expanding. Stock price prediction using machine learning has so far mainly targeted market indices such as KOSPI, using fundamental indicators derived from financial statements or technical indicators derived from prices. However, most studies have proceeded without explicitly verifying the prediction rate on the learning data. In this study, we conducted an experiment to determine the market prediction ability of the fundamental indicators, technical indicators, and systemic risk indicators (AR) used in stock price prediction. First, we set the core parameters for each financial indicator and define an objective function reflecting return and volatility. Then, we extract samples from the distribution of each parameter by the Markov chain Monte Carlo (MCMC) method and find the optimum values that maximize the objective function. Since a Robo Advisor is a product that trades financial instruments such as stocks and funds, it cannot rely on forecasting the market index alone. The sample for this experiment is 17 years of data on 1,500 stocks that have been listed in Korea for more than 5 years. As a result of the experiment, we were able to establish a meaningful trading strategy that exceeds the market return. This study can serve as a basis for the development of Robo Advisor products in that it covers a large proportion of the stocks listed in Korea, rather than experimenting on a single index, and verifies the market predictability of various financial indicators.
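
The MCMC-based parameter search described above can be illustrated with a toy Metropolis sampler that maximizes an objective function. This is a generic sketch; the paper's return/volatility objective and parameter distributions are not reproduced:

```python
import math
import random

def metropolis_maximize(objective, x0, step=0.5, iters=2000, temp=1.0, seed=42):
    """Metropolis sampling over one parameter: propose Gaussian steps,
    accept with probability exp((f(x') - f(x)) / temp), track the best."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)
        f_cand = objective(cand)
        # Always accept uphill moves; accept downhill with Boltzmann probability.
        if f_cand >= fx or rng.random() < math.exp((f_cand - fx) / temp):
            x, fx = cand, f_cand
            if fx > best_f:
                best_x, best_f = x, fx
    return best_x, best_f
```

In the paper's setting, `objective` would score a trading strategy by its return and volatility, and one chain would run per financial-indicator parameter.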

A Distributed Power Control Algorithm for Data Load Balancing with Coverage in Dynamic Femtocell Networks (다이나믹 펨토셀 네트워크에서 커버리지와 데이터 부하 균형을 고려한 기지국의 파워 조절 분산 알고리즘)

  • Shin, Donghoon;Choi, Sunghee
    • KIISE Transactions on Computing Practices / v.22 no.2 / pp.101-106 / 2016
  • A femtocell network has been attracting attention as a promising solution for providing higher data rate transmission than the conventional cellular network in an indoor environment. In this paper, we propose a distributed power control algorithm considering both indoor coverage and data load balancing in the femtocell network. As data traffic varies by time and location according to the user distribution, each femto base station suffers from an unbalanced data load, which may degrade network performance. To distribute the data load, the base stations are required to adjust their transmission power dynamically. Since there are a large number of base stations in practice, we propose a distributed power control algorithm. In addition, we propose a simple algorithm to detect faulty base stations and recover coverage, and we explain how to insert a new base station into a deployed network. We present simulation results evaluating the proposed algorithms.
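
A minimal sketch of one local power-adjustment step, assuming a simple rule (shrink power when overloaded relative to neighbors, grow it when underloaded). This is illustrative, not the paper's exact algorithm:

```python
def adjust_power(load, neighbor_loads, power, step=0.1, p_min=0.1, p_max=1.0):
    """One distributed update at a femto base station: compare its own
    data load with the neighbors' average and nudge transmission power."""
    avg = sum(neighbor_loads) / len(neighbor_loads)
    if load > avg:
        power -= step   # overloaded: shrink coverage to shed users
    elif load < avg:
        power += step   # underloaded: grow coverage to absorb users
    return min(p_max, max(p_min, power))  # clamp to the feasible power range
```

Each base station runs this rule using only locally exchanged load information, which is what makes the algorithm distributed.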

Improved Object Recognition using Wavelet Transform & Histogram Equalization in the variable illumination (다양한 조명하에서 웨이블렛 변환과 히스토그램 평활화를 이용한 개선된 물체인식)

  • Kim Jae-Nam;Jung Byeong-Soo;Kim Byung-Ki
    • The KIPS Transactions:PartD / v.13D no.2 s.105 / pp.287-292 / 2006
  • There are two problems with the existing principal component analysis, which is regarded as the most effective technique in object recognition. First, the volume of calculation increases in proportion to the square of the image size. Second, accuracy decreases with illumination changes. To solve these problems, this paper proposes wavelet transformation and histogram equalization. Wavelet transformation solves the first problem by using low-resolution images. To solve the second problem, histogram equalization enlarges the contrast of images and widens the distribution of brightness values. The proposed method improves the recognition rate by minimizing the effect of illumination change; it also speeds up processing and reduces the image area through wavelet transformation.
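
The histogram-equalization step used to counter illumination change works as follows. This is classic textbook equalization on a flat list of gray levels (the wavelet step is omitted, and the image is assumed not to be a single flat gray level):

```python
def equalize_histogram(pixels, levels=256):
    """Map each gray level through the normalized cumulative distribution
    so the output brightness spreads over the full range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF entry
    # Spread the cumulative distribution over the full brightness range.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

A dark, low-contrast image and a bright version of the same scene map to similar equalized images, which is why this reduces sensitivity to illumination.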

Ensemble Learning of Region Based Classifiers (지역 기반 분류기의 앙상블 학습)

  • Choi, Sung-Ha;Lee, Byung-Woo;Yang, Ji-Hoon
    • The KIPS Transactions:PartB / v.14B no.4 / pp.303-310 / 2007
  • In machine learning, the ensemble classifier, a set of classifiers, has been introduced to achieve higher accuracy than individual classifiers. We propose a new ensemble learning method that employs a set of region-based classifiers. Since the distribution of data can differ across regions of the feature space, we split the data, generate a classifier for each region, and apply weighted voting among the classifiers. To evaluate the proposed method, we used 11 data sets from the UCI Machine Learning Repository and compared its performance with that of individual classifiers as well as existing ensemble methods such as bagging and boosting. We found that our method produced improved performance, particularly when the base learner is Naive Bayes or SVM.
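
The weighted voting among region classifiers can be sketched as follows. The weights here stand in for per-region classifier qualities, which the paper's method would learn from data:

```python
def weighted_vote(predictions, weights):
    """Combine the labels predicted by several region-based classifiers,
    each vote counted with that classifier's weight; return the winner."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

A single strong classifier can outvote two weak ones: three votes `['a', 'b', 'b']` with weights `[0.9, 0.4, 0.4]` still elect `'a'` only if its weight exceeds the combined `'b'` weight.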

Integrity Authentication Algorithm of JPEG Compressed Images through Reversible Watermarking (가역 워터마킹 기술을 통한 JPEG 압축 영상의 무결성 인증 알고리즘)

  • Jo, Hyun-Wu;Yeo, Dong-Gyu;Lee, Hae-Yeoun
    • The KIPS Transactions:PartB / v.19B no.2 / pp.83-92 / 2012
  • Multimedia contents can be copied and manipulated without quality degradation; therefore, they are vulnerable to digital forgery and illegal distribution. These days, with the increasing importance of multimedia security, various multimedia security techniques are being studied. In this paper, we propose a content authentication algorithm based on reversible watermarking which supports JPEG compression, commonly used for multimedia contents. After splitting the image into blocks, a specific authentication code for each block is extracted and embedded into the quantized coefficients of JPEG compression, which are preserved against lossy processing. In the decoding process, the watermarked JPEG image is authenticated by extracting the embedded code and is restored to the original image quality. To evaluate the performance of the proposed algorithm, we analyzed image quality and compression ratio on various test images. The average PSNR and compression ratio of the watermarked JPEG images were 33.13dB and 90.65%, respectively, differing from standard JPEG compression by 2.44dB and 1.63%.
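
Reversible watermarking can be illustrated with Tian's difference-expansion scheme on a pair of values — a standard reversible technique, shown here on a plain value pair rather than the paper's JPEG quantized-coefficient domain:

```python
def embed_bit(x, y, bit):
    """Embed one payload bit into the pair (x, y) by expanding their
    difference; the original pair is exactly recoverable."""
    avg = (x + y) // 2
    diff = x - y
    new_diff = 2 * diff + bit           # expand the difference, append the bit
    return avg + (new_diff + 1) // 2, avg - new_diff // 2

def extract_bit(x2, y2):
    """Recover the embedded bit and restore the original pair."""
    avg = (x2 + y2) // 2
    diff = x2 - y2
    bit = diff & 1                      # the payload bit is the parity
    orig = diff >> 1                    # undo the expansion
    return bit, (avg + (orig + 1) // 2, avg - orig // 2)
```

Reversibility is the key property the abstract relies on: after the authentication code is extracted and verified, the cover values return to their pre-embedding state, so image quality is fully restored.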

Region Segmentation from MR Brain Image Using an Ant Colony Optimization Algorithm (개미 군집 최적화 알고리즘을 이용한 뇌 자기공명 영상의 영역분할)

  • Lee, Myung-Eun;Kim, Soo-Hyung;Lim, Jun-Sik
    • The KIPS Transactions:PartB / v.16B no.3 / pp.195-202 / 2009
  • In this paper, we propose a region segmentation method for the white matter and the gray matter of brain MR images using the ant colony optimization algorithm. Ant Colony Optimization (ACO) is a metaheuristic algorithm for solving hard combinatorial optimization problems. The algorithm finds promising pixels in an image the way real ants find food between the nest and a food source: ants deposit pheromone on pixels, and this pheromone affects the motion of subsequent ants. At each iteration, ants change their positions in the image according to a transition rule. Finally, we obtain the segmentation result by analyzing the pheromone distribution in the image. We compared the proposed method with other thresholding methods, namely Otsu's method, the genetic algorithm, the fuzzy method, and the original ant colony optimization algorithm. The comparison shows that the proposed method is more accurate than the other thresholding methods for segmenting specific region structures in MR brain images.
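
One iteration of the ACO transition-and-pheromone-update loop can be sketched as follows. This is a toy version over a flat list of pixels; the parameter values and the heuristic term are illustrative:

```python
import random

def aco_step(pheromone, heuristic, n_ants, alpha=1.0, beta=2.0,
             evaporation=0.1, deposit=1.0, rng=None):
    """Ants pick pixels with probability proportional to
    pheromone^alpha * heuristic^beta, then reinforce them; all trails decay."""
    rng = rng or random.Random(0)
    weights = [(p ** alpha) * (h ** beta) for p, h in zip(pheromone, heuristic)]
    total = sum(weights)
    positions = []
    for _ in range(n_ants):
        r = rng.random() * total        # roulette-wheel selection by weight
        acc = 0.0
        for i, wgt in enumerate(weights):
            acc += wgt
            if r < acc:
                positions.append(i)
                break
    pheromone = [(1.0 - evaporation) * p for p in pheromone]  # evaporation
    for i in positions:
        pheromone[i] += deposit         # ants reinforce visited pixels
    return pheromone, positions
```

After many iterations, pheromone concentrates on pixels consistent with the target tissue, and the segmentation is read off the pheromone distribution, as the abstract describes.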

An Estimation of Risky Module using SVM (SVM을 이용한 위험모듈 예측)

  • Kim, Young-Mi;Jeong, Choong-Heui;Kim, Hyeon-Soo
    • Journal of KIISE:Computing Practices and Letters / v.15 no.6 / pp.435-439 / 2009
  • Software used in safety-critical systems must have high dependability, and software testing and V&V (Verification and Validation) activities are very important for assuring high software quality. If we can predict the risky modules of safety-critical software, we can focus testing and regulation activities more efficiently, for example in distributing resources. In this paper, we classified modules into estimated risk classes, which can be used to guide in-depth testing and V&V, and predicted the risk class of each module using support vector machines. Modules classified into risk classes 5 and 4 can be considered relatively riskier than the others. Given the classification error rates, we expect that the results can be useful and practical for software testing, V&V, and regulatory review activities.
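
The SVM classification step can be illustrated with a tiny linear SVM trained by Pegasos-style sub-gradient descent. This is a binary risky/not-risky stand-in for the paper's five risk classes, and the features and data are made up:

```python
import random

def train_linear_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Train w, b on (feature-vector, +/-1 label) pairs by minimizing the
    regularized hinge loss with decreasing step sizes."""
    rng = random.Random(seed)
    w = [0.0] * len(samples[0])
    b, t = 0.0, 0
    for _ in range(epochs):
        order = list(range(len(samples)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)       # decreasing learning rate
            x, y = samples[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1.0:            # hinge-loss violation: push boundary
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y

    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0.0 else -1
```

In the paper's setting each feature vector would hold module metrics, and a multi-class scheme (e.g. one-vs-rest) would separate the five risk classes.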

An Adaptive Storage System for Enhancing Data Reliability in Solar-powered Sensor Networks (태양 에너지 기반 센서 네트워크에서 데이터의 안정성을 향상시키기 위한 적응형 저장 시스템)

  • Noh, Dong-Kun
    • Journal of KIISE:Computer Systems and Theory / v.36 no.5 / pp.360-370 / 2009
  • Using solar power in wireless sensor networks requires a different approach to energy optimization than networks with battery-powered nodes. Solar energy is an inexhaustible supply which can potentially allow a system to run forever, but several issues must be considered, such as the uncertainty of the energy supply and the constraint of rechargeable battery capacity. In this paper, we present SolarSS: a reliable storage system for solar-powered sensor networks, which provides a set of functions in separate layers, such as sensory data collection, replication to prevent failure-induced data loss, and storage balancing to prevent depletion-induced data loss. SolarSS dynamically adapts the set of active layers to solar energy availability, and provides an efficient resource allocation and data distribution scheme to minimize data loss.
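
The energy-adaptive layer activation can be sketched as follows. The threshold values and layer names are illustrative, not from the paper:

```python
def active_layers(energy):
    """Choose which storage layers stay on for the current normalized
    energy level (0.0-1.0): shed the optional layers as energy drops,
    keeping basic data collection alive at all times."""
    if energy >= 0.7:
        return ["collection", "replication", "balancing"]
    if energy >= 0.4:
        return ["collection", "replication"]
    return ["collection"]
```

The design choice this illustrates is graceful degradation: reliability features are traded away before the core sensing function, so low solar input never stops data collection outright.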

Dynamic Data Cubes Over Data Streams (데이타 스트림에서 동적 데이타 큐브)

  • Seo, Dae-Hong;Yang, Woo-Sock;Lee, Won-Suk
    • Journal of KIISE:Databases / v.35 no.4 / pp.319-332 / 2008
  • The data cube, a multi-dimensional data model, has been successfully applied in many cases of multi-dimensional data analysis, and research continues on applying it to data stream analysis. A data stream is generated in real time and is continuous, immense, and volatile; because of these characteristics the distribution of the data changes rapidly, so the primary rule of handling a data stream is to examine each element once and discard it. Consequently, users are more interested in attribute values with high support than in the entire set of attribute values observed over the stream. This paper proposes the dynamic data cube, which adapts the data cube to the data stream environment. The dynamic data cube specifies the user's area of interest by the support ratio of attribute values and dynamically manages attribute values by grouping them, reducing memory usage and processing time. It can also efficiently show or emphasize the user's area of interest by increasing the granularity of attributes with higher support. We performed experiments to verify how efficiently the dynamic data cube works under limited memory.
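
The support-based grouping of attribute values can be sketched as follows. The support threshold is illustrative, and a real stream version would maintain the counts incrementally rather than from a list:

```python
from collections import Counter

def group_by_support(values, high_support=0.2):
    """Split attribute values by relative support: frequent values keep
    their own fine-grained cell, the rest collapse into one coarse group."""
    counts = Counter(values)
    n = len(values)
    # High-support values are the user's area of interest: keep them distinct.
    fine = {v: c for v, c in counts.items() if c / n >= high_support}
    # Everything else is merged into a single low-granularity group.
    other = n - sum(fine.values())
    return fine, other
```

This is the memory trade-off the abstract describes: granularity is spent on high-support values while rare values share one cell.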