• Title/Summary/Keyword: histogram data


Efficient Storage Structures for a Stock Investment Recommendation System (주식 투자 추천 시스템을 위한 효율적인 저장 구조)

  • Ha, You-Min;Kim, Sang-Wook;Park, Sang-Hyun;Lim, Seung-Hwan
    • The KIPS Transactions:PartD
    • /
    • v.16D no.2
    • /
    • pp.169-176
    • /
    • 2009
  • Rule discovery is an operation that discovers patterns frequently occurring in a given database. Rule discovery makes it possible to find useful rules from a stock database, thereby recommending buying or selling times to stock investors. In this paper, we discuss storage structures for efficient processing of queries in a system that recommends stock investments. First, we propose five storage structures for efficient recommendation of stock investments. Next, we discuss their characteristics, advantages, and disadvantages. Then, we verify their performance through extensive experiments with real-life stock data. The results show that the histogram-based structure improves query performance over the previous structure by up to about 170 times.

A Study on Hangeul Confusable Character Recognition Using Fractal Dimensions and Attractors (프랙탈 차원과 어트랙트를 이용한 한글 혼동 문자 인식에 관한 연구)

  • Son, Yeong-U
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.7
    • /
    • pp.1825-1831
    • /
    • 1999
  • In this paper, to reduce misrecognized characters, we propose a new method that extracts features from characters for character recognition using fractal dimensions and attractors. First, to reduce the load on the recognizer, we classify the characters. For each classified character, we extract features based on box-counting dimensions, natural measures, and information dimensions, and then recognize the characters. Using histograms, we generate attractors and calculate dimensions from the attractors. We then recognize characters using the dimensions of the characters and their attractors. Experimental results show that the overall recognition rates for the training data and testing data are 96.03% and 91.74%, respectively, demonstrating the effectiveness of the proposed method.

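The box-counting dimension mentioned in the abstract can be estimated directly from a binary character image by counting occupied boxes at several grid sizes and fitting a log-log slope. A minimal sketch (the function name and box sizes are illustrative, not from the paper):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary image.

    Counts the s x s boxes that contain at least one foreground pixel,
    then fits log(N) against log(1/s); the slope is the dimension.
    """
    counts = []
    for s in sizes:
        # Number of boxes per axis (ceiling division covers the edges).
        h = -(-img.shape[0] // s)
        w = -(-img.shape[1] // s)
        n = 0
        for i in range(h):
            for j in range(w):
                if img[i * s:(i + 1) * s, j * s:(j + 1) * s].any():
                    n += 1
        counts.append(n)
    # Slope of log(N) versus log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity input: a filled square, whose dimension should be close to 2.
square = np.ones((64, 64), dtype=bool)
```

A solid region gives a value near 2 and a thin stroke near 1, which is what makes the dimension useful for separating confusable character shapes.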

An Adaptive Face Recognition System Based on a Novel Incremental Kernel Nonparametric Discriminant Analysis

  • SOULA, Arbia;SAID, Salma BEN;KSANTINI, Riadh;LACHIRI, Zied
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.2129-2147
    • /
    • 2019
  • This paper introduces an adaptive face recognition method based on a Novel Incremental Kernel Nonparametric Discriminant Analysis (IKNDA) that is able to learn through time. More precisely, the IKNDA has the advantage of incrementally reducing data dimension, in a discriminative manner, as new samples are added asynchronously; it therefore handles dynamic and large data better. To perform face recognition effectively, we combine Gabor features and ordinal measures to extract facial features that are coded across local parts, as visual primitives. The ordinal measures are extracted from Gabor filtering responses. The histograms of these primitives, across a variety of facial zones, are then concatenated to obtain a feature vector, whose dimension is reduced using PCA. Finally, this vector is treated as the facial input for the proposed IKNDA. A comparative evaluation of the IKNDA is performed for face recognition, and for other classification tasks, in a decontextualized evaluation scheme in which we compare the IKNDA model to relevant state-of-the-art incremental and batch discriminant models. Experimental results show that the IKNDA outperforms these discriminant models and is a better tool for improving face recognition performance.
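The PCA step that shrinks the concatenated histogram vector before the IKNDA stage is a generic projection onto the leading principal components. A sketch under assumed, hypothetical dimensions (the 512-dim features and target size 32 are not from the paper):

```python
import numpy as np

def pca_reduce(features, k):
    """Project row-wise feature vectors onto their top-k principal
    components (a generic PCA reduction, as used here to shrink the
    concatenated Gabor/ordinal histogram vector)."""
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Hypothetical example: 100 face samples, 512-dim concatenated histograms.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 512))
reduced = pca_reduce(feats, 32)
```

The reduced vectors then serve as input to the incremental discriminant model, which keeps updating as new samples arrive.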

Development of the KASS Multipath Assessment Tool

  • Cho, SungLyong;Lee, ByungSeok;Choi, JongYeoun;Nam, GiWook
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.7 no.4
    • /
    • pp.267-275
    • /
    • 2018
  • The reference stations in a satellite-based augmentation system (SBAS) collect raw data from global navigation satellite systems (GNSS) to generate correction and integrity information. Multipath signals degrade GNSS raw-data quality and adversely affect SBAS performance. The currently operating SBASs (WAAS, EGNOS, etc.) use existing commercial equipment to perform multipath assessment around their antennas: the signal power of the GNSS signal and of the multipath are estimated at NovAtel's MEDLL receiver, and the results are reproduced as a signal-power ratio by the NovAtel Multipath Assessment Tool (MAT). However, the same test environment cannot be configured at the reference stations of the Korean Augmentation Satellite System (KASS), because the MAT and MEDLL receivers used in the existing systems have been discontinued. This paper therefore proposes a test environment for multipath assessment around the antennas, the KASS Multipath Assessment Tool (K-MAT). K-MAT estimates the multipath error contained in the code pseudorange using a linear combination of the measurements, and presents the results as polar plots and histograms of the estimated values.
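The abstract does not give K-MAT's exact linear combination, but a standard observable of this kind is the dual-frequency code-minus-carrier (MP1) combination, in which geometry and first-order ionosphere cancel. The sketch below uses synthetic GPS L1/L2 measurements; it is the textbook combination, not necessarily the one K-MAT implements:

```python
# Textbook dual-frequency code-minus-carrier multipath observable.
F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies (Hz)
ALPHA = (F1 / F2) ** 2          # ionospheric scale factor between bands

def mp1(code1, phase1, phase2):
    """Code multipath estimate on L1: range and first-order ionosphere
    cancel in this linear combination, leaving multipath plus a carrier
    ambiguity bias (zero here, since the synthetic phases carry none)."""
    return (code1
            - (ALPHA + 1) / (ALPHA - 1) * phase1
            + 2.0 / (ALPHA - 1) * phase2)

# Synthetic measurements: range rho, ionospheric delay I, multipath m (m).
rho, iono, m = 21_000_000.0, 3.0, 0.75
code1 = rho + iono + m          # code is delayed by the ionosphere
phase1 = rho - iono             # carrier is advanced by the same amount
phase2 = rho - ALPHA * iono     # L2 sees ALPHA times the L1 delay
```

Plotting such per-epoch estimates against azimuth/elevation (polar plot) and as a histogram is exactly the kind of presentation the abstract describes.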

An Improved Steganography Method Based on Least-Significant-Bit Substitution and Pixel-Value Differencing

  • Liu, Hsing-Han;Su, Pin-Chang;Hsu, Meng-Hua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4537-4556
    • /
    • 2020
  • This research is based on the study by Khodaei et al. (2012), namely least-significant-bit (LSB) substitution combined with pixel-value-differencing (PVD) steganography, and presents an improved irreversible image steganography method. The method integrates an improved LSB substitution with modulus-function-based PVD steganography to increase the steganographic capacity of the original technique while maintaining image quality. It partitions the cover image into non-overlapping blocks, each consisting of 3 consecutive pixels. The 2nd pixel serves as the base, in which secret data are embedded using 3-bit LSB substitution. Each of the other 2 pixels is paired with the base for embedding secret data using an improved modulus PVD method. The experimental results showed that the method greatly increases steganographic capacity in comparison with other PVD-based techniques (by a maximum of 135%), while the quality of the images is maintained. Finally, 2 security analyses, the pixel-difference-histogram (PDH) and the content-selective-residual (CSR) steganalysis, were performed; the results indicate that the method resists detection by these 2 common techniques.
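The 3-bit LSB substitution applied to the base pixel can be sketched in a few lines. This shows only that step; the modulus-PVD embedding for the two neighbouring pixels, and any readjustment of the stego value to reduce distortion, are omitted:

```python
def embed_lsb3(pixel, bits):
    """Embed 3 secret bits into the 3 least-significant bits of the base
    pixel (the LSB step of the LSB+PVD scheme; the modulus-PVD step for
    the two neighbouring pixels is not shown in this sketch)."""
    value = int(bits, 2)               # '101' -> 5, range 0..7
    return (pixel & ~0b111) | value    # clear the 3 LSBs, write the bits

def extract_lsb3(pixel):
    """Recover the 3 embedded bits from a stego pixel."""
    return format(pixel & 0b111, '03b')

stego = embed_lsb3(154, '101')
```

Because only the 3 LSBs change, the per-pixel distortion is bounded by 7 gray levels, which is why the base-pixel step preserves image quality.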

A study on road damage detection for safe driving of autonomous vehicles based on OpenCV and CNN

  • Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.47-54
    • /
    • 2022
  • For the safe driving of autonomous vehicles, road damage detection is very important to lower the potential risk. To ensure safety while an autonomous vehicle is driving on the road, technology that can cope with various obstacles is required. Among these, priority goes to technology that recognizes poor road conditions and road features encountered while driving, such as crosswalks, manholes, hollows, and speed bumps. In this paper, we propose a method to extract image similarity and find damaged-road images using OpenCV image processing and a CNN algorithm. To implement this, we trained a CNN model using 280 training images and 70 test images out of 350 images. After training, we measured the object recognition and processing speed on 100 images: the average processing speed was 45.9 ms, the average recognition speed was 66.78 ms, and the average object accuracy was 92%. In the future, we expect the driving safety of autonomous vehicles to improve through technology that detects road obstacles encountered while driving.

Transition-based Data Decoding for Optical Camera Communications Using a Rolling Shutter Camera

  • Kim, Byung Wook;Lee, Ji-Hwan;Jung, Sung-Yoon
    • Current Optics and Photonics
    • /
    • v.2 no.5
    • /
    • pp.422-430
    • /
    • 2018
  • The rolling-shutter operation of CMOS cameras can be utilized in optical camera communications to transmit data from an LED to mobile devices such as smartphones. From temporally modulated light, a spatial flicker pattern is obtained in the captured image and used for signal recovery. Due to the degradation of rolling-shutter images caused by light smear, motion blur, and focus blur, conventional decoding schemes for rolling-shutter cameras based on the pattern widths of 'OFF' and 'ON' cannot guarantee robust communication performance in practice. Aside from conventional techniques such as polynomial fitting, histogram equalization can be used to mitigate blurry light, but it requires additional computation, burdening mobile devices. This paper proposes a transition-based decoding scheme for rolling-shutter cameras that offers simple and robust data decoding in the presence of image degradation. Based on the designed synchronization pulse and data symbols modulated according to the LED dimming level, decoding is performed by observing the transition patterns of two sequential symbol pulses. An extended symbol pulse, caused by consecutive symbol pulses at the same level, determines whether the second pulse should be included in decoding the next bit. The proposed method simply identifies the transition patterns of sequential symbol pulses rather than the pattern widths of 'OFF' and 'ON', and is thus simpler and more accurate. Experimental results confirm that the transition-based decoding scheme is robust even in the presence of blurry lights in the captured image at various dimming levels.

An Efficient Bitmap Indexing Method for Multimedia Data Reflecting the Characteristics of MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 특성을 반영한 효율적인 멀티미디어 데이타 비트맵 인덱싱 방법)

  • Jeong Jinguk;Nang Jongho
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.9-20
    • /
    • 2005
  • Recently, the MPEG-7 standard, a multimedia content description standard, has been widely used for content-based image/video retrieval systems. However, since the descriptors standardized in MPEG-7 are usually multidimensional and suffer from the so-called 'curse of dimensionality', previously proposed indexing methods (for example, multidimensional indexing methods, dimensionality-reduction methods, filtering methods, and so on) cannot effectively index a multimedia database represented in MPEG-7. This paper proposes an efficient multimedia data indexing mechanism reflecting the characteristics of MPEG-7 visual descriptors. In the proposed mechanism, a descriptor is transformed into a histogram of some attributes. By representing the value of each bin as a binary number, the histogram itself, the visual descriptor for an object in the multimedia database, can be represented as a bit string. The bit strings for all objects in the database are collected to form an index file, a bitmap index. By XORing them with the descriptor of a query object, candidate solutions for a similarity search can be computed easily; the candidates are then checked against the query object to compute the similarity precisely with an exact metric such as the L1-norm. These indexing and searching mechanisms are efficient because the filtering step is performed by simple bit operations and reduces the search space dramatically. In experiments with more than 100,000 real images, the proposed indexing and searching mechanisms are about 15 times faster than sequential searching, with more than 90% accuracy.
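The filter-and-refine search described in the abstract can be sketched briefly: quantize each histogram bin to a small binary number, XOR bit strings to filter by Hamming distance, then rank survivors by exact L1 distance. Bin counts, bit widths, and thresholds below are illustrative, not the paper's settings:

```python
import numpy as np

def to_bits(hist, bits_per_bin=4):
    """Quantize each histogram bin to a bits_per_bin binary number and
    concatenate the bits into one bit string (a 0/1 uint8 array)."""
    levels = 2 ** bits_per_bin - 1
    q = np.round(np.asarray(hist) * levels).astype(np.uint8)
    return np.unpackbits(q[:, None], axis=1)[:, 8 - bits_per_bin:].ravel()

def filter_and_refine(index_bits, histograms, query_hist,
                      hamming_limit, top_k=1):
    """XOR the query's bit string against every row of the bitmap index,
    keep candidates within a Hamming-distance limit, then rank the
    survivors by the exact L1 distance on the raw histograms."""
    query_bits = to_bits(query_hist)
    hamming = np.bitwise_xor(index_bits, query_bits).sum(axis=1)
    candidates = np.flatnonzero(hamming <= hamming_limit)
    l1 = np.abs(histograms[candidates] - query_hist).sum(axis=1)
    return candidates[np.argsort(l1)[:top_k]]

# Hypothetical toy database: three 4-bin histograms plus a query.
hists = np.array([[0.90, 0.10, 0.00, 0.00],
                  [0.00, 0.10, 0.90, 0.00],
                  [0.25, 0.25, 0.25, 0.25]])
index_bits = np.stack([to_bits(h) for h in hists])
query = np.array([0.85, 0.15, 0.00, 0.00])
```

The XOR-and-popcount filter touches only bits, so it scans the whole index cheaply; the exact L1 metric is computed for the few surviving candidates.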

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services
    • /
    • v.20 no.4
    • /
    • pp.91-102
    • /
    • 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, after extracting features of pedestrians with HOG(Histogram of Oriented Gradients) or SIFT(Scale-Invariant Feature Transform), people classified them using SVM(Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using CNN's(Convolutional Neural Network) deep features and transfer learning. We have experimented with both the fixed feature extractor and the fine-tuning methods, which are two representative transfer learning techniques. Particularly, in the fine-tuning method, we have added a new scheme, called M-Fine(Modified Fine-tuning), which divides layers into transferred parts and non-transferred parts in three different sizes, and adjusts weights only for layers belonging to non-transferred parts. Experiments on INRIA Person data set with five CNN models(VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN's deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved similar performance to Xception and learned 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested above, the performance of the fine-tuning method was the best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.

A Study on Particular Abnormal Gait Using Accelerometer and Gyro Sensor (가속도센서와 각속도센서를 이용한 특정 비정상보행에 관한 연구)

  • Heo, Geun-Sub;Yang, Seung-Han;Lee, Sang-Ryong;Lee, Jong-Gyu;Lee, Choon-Young
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.29 no.11
    • /
    • pp.1199-1206
    • /
    • 2012
  • Recently, technologies to help elderly or disabled people who have difficulty walking are being developed. To develop these technologies, it is necessary to build a system that gathers people's gait data, and the analysis of these data is also important. In this research, we developed a sensor system consisting of a pressure sensor, a three-axis accelerometer, and a two-axis gyro sensor. We used the k-means clustering algorithm to classify the data for characterization, and then calculated a symmetry index from the histogram produced for each cluster. We collected gait data from sensors attached to two subjects. The experiment covered two gait conditions: walking with a normal gait, and walking with an abnormal gait (the subject walks while intentionally dragging the right leg). From the analysis of the acceleration component, we confirmed that this analysis technique can be used to determine gait symmetry. In addition, by adding the gyro components to the analysis, we found that the symmetry index expressed the symmetry even better.
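A histogram-based symmetry index of the kind described can be illustrated on synthetic acceleration data. The abstract does not define the exact index used, so the histogram-overlap measure below, and the synthetic leg data, are assumptions for illustration only:

```python
import numpy as np

def symmetry_index(left, right, bins=20, span=(-2.0, 2.0)):
    """Histogram-overlap symmetry index between left- and right-leg
    acceleration samples: 1.0 for identical distributions, falling
    toward 0 as they diverge. (Illustrative definition; the paper's
    exact index is not given in the abstract.)"""
    h_l, _ = np.histogram(left, bins=bins, range=span)
    h_r, _ = np.histogram(right, bins=bins, range=span)
    h_l = h_l / h_l.sum()
    h_r = h_r / h_r.sum()
    return float(np.minimum(h_l, h_r).sum())  # shared histogram area

# Synthetic accelerations: a symmetric gait vs. a dragged right leg,
# modeled here as a shifted acceleration distribution.
rng = np.random.default_rng(1)
left_leg = rng.normal(0.0, 0.3, 2000)
right_normal = rng.normal(0.0, 0.3, 2000)
right_dragged = rng.normal(0.8, 0.3, 2000)
```

A symmetric gait yields overlapping left/right histograms (index near 1), while dragging one leg shifts its distribution and lowers the index, matching the asymmetry the paper detects.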