• Title/Summary/Keyword: Computing amount

XML Fragmentation for Resource-Efficient Query Processing over XML Fragment Stream (자원 효율적인 XML 조각 스트림 질의 처리를 위한 XML 분할)

  • Kim, Jin;Kang, Hyun-Chul
    • The KIPS Transactions: Part D, v.16D no.1, pp.27-42, 2009
  • In realizing ubiquitous computing, techniques for efficiently using the limited resources of clients such as mobile devices are required. On a mobile device with a limited amount of memory, XML stream query processing techniques must be employed to process queries over a large volume of XML data. Recently, several techniques were proposed that fragment XML documents into XML fragments and stream them for query processing at the client. During query processing, resource usage (query processing time and memory usage) can differ greatly depending on how the source XML documents are fragmented, so an efficient fragmentation technique is needed. In this paper, we propose an XML fragmentation technique that enhances resource efficiency in query processing at the client. We first present a cost model of query processing over an XML fragment stream, and then propose an algorithm for resource-efficient XML fragmentation. Through implementation and experiments, we showed that our fragmentation technique outperformed previous techniques in both processing time and memory usage. The contribution of this paper is to make query processing over XML fragment streams more feasible for practical use.
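
The abstract does not give the cost model's exact form; one plausible shape is a weighted sum of estimated processing time and peak memory per candidate fragmentation, with the cheapest candidate chosen. A minimal sketch under that assumption (all names, weights, and estimates below are hypothetical):

```python
# Hypothetical sketch of choosing a fragmentation via a simple cost model.
# The paper's actual cost model is not given in the abstract; the weights
# and the candidate estimates are illustrative assumptions.

def fragmentation_cost(est_time_ms, est_peak_mem_kb, w_time=0.5, w_mem=0.5):
    """Weighted sum of estimated query processing time and peak memory."""
    return w_time * est_time_ms + w_mem * est_peak_mem_kb

def pick_best_fragmentation(candidates):
    """candidates: list of (name, est_time_ms, est_peak_mem_kb) tuples."""
    return min(candidates, key=lambda c: fragmentation_cost(c[1], c[2]))

best = pick_best_fragmentation([
    ("per-element", 120.0, 900.0),   # many tiny fragments
    ("per-subtree", 80.0, 1500.0),   # fewer, larger fragments
])
print(best[0])
```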

A Data Aggregation Scheme for Enhancing the Efficiency of Data Aggregation and Correctness in Wireless Sensor Networks (무선 센서 네트워크에서 데이터 수집의 효율성 및 정확성 향상을 위한 데이터 병합기법)

  • Kim, Hyun-Tae;Yu, Tae-Young;Jung, Kyu-Su;Jeon, Yeong-Bae;Ra, In-Ho
    • Journal of the Korean Institute of Intelligent Systems, v.16 no.5, pp.531-536, 2006
  • Recently, with rapid advances in sensor and wireless communication technologies, many researchers have studied data-processing-oriented middleware for wireless sensor networks. In a wireless sensor network, the middleware should handle data loss at intermediate sensor nodes caused by instantaneous data burstiness, in order to support efficient processing and fast delivery of sensed data. To handle this, a simple data-discarding or data-compressing policy is typically used to reduce the total amount of data to be transferred. However, a discarding policy decreases the correctness of the collected data, while a compressing policy adds processing overhead proportional to the complexity of the algorithm. In this paper, we propose a data-averaging method for enhancing the efficiency and correctness of data aggregation where sensed data must be delivered with only limited computing power and energy resources. With the proposed method, unnecessary transfers of overlapped data are eliminated, and data correctness is enhanced by the averaging scheme when instantaneous data burstiness occurs. Finally, TOSSIM simulation results on TinyDB show that the correctness of the transferred data is enhanced.
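
The core idea, averaging overlapped readings at an intermediate node during a burst instead of discarding them, can be sketched in a few lines. A minimal sketch, assuming a fixed-size outgoing buffer and numeric readings (all names hypothetical):

```python
# Hypothetical sketch of the data-averaging idea: when the outgoing buffer
# would overflow during a burst, collapse overlapped readings from the same
# source into their average instead of dropping them.
from collections import defaultdict

def aggregate_burst(readings, buffer_capacity):
    """readings: list of (source_id, value); returns <= one item per source."""
    if len(readings) <= buffer_capacity:
        return readings
    by_source = defaultdict(list)
    for src, val in readings:
        by_source[src].append(val)
    # One averaged reading per source keeps information a discard policy loses.
    return [(src, sum(vals) / len(vals)) for src, vals in by_source.items()]

print(aggregate_burst([(1, 20.0), (1, 22.0), (2, 18.5)], buffer_capacity=2))
```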

A Study on Analysis of national R&D research trends for Artificial Intelligence using LDA topic modeling (LDA 토픽모델링을 활용한 인공지능 관련 국가R&D 연구동향 분석)

  • Yang, MyungSeok;Lee, SungHee;Park, KeunHee;Choi, KwangNam;Kim, TaeHyun
    • Journal of Internet Computing and Services, v.22 no.5, pp.47-55, 2021
  • Research trends in a specific subject area are usually analyzed by extracting keywords from literature information (papers, patents, etc.) and examining related topics and topic changes with topic modeling techniques. Unlike existing approaches, this paper extracts topics using the LDA topic modeling technique on the project information of national R&D projects in the field of artificial intelligence provided by the National Science and Technology Knowledge Information Service (NTIS). By analyzing these topics, this study aims to analyze the research topics and investment directions of national R&D projects. NTIS provides a vast amount of national R&D information, from information on tasks carried out through national R&D projects to the research results they generate (theses, patents, etc.). In this paper, we performed keyword and related-classification searches for artificial intelligence in the NTIS integrated search, confirmed the search results, and constructed the base data by downloading the last three years of project information. Using an LDA topic modeling library available in Python, we extracted and analyzed related topics and keywords from the base data (research goals, research content, expected effects, keywords, etc.) to derive insights into the direction of research investment.
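
The abstract names Python LDA tooling without specifying the library; a minimal sketch with gensim (one common choice, not necessarily the authors'), assuming the NTIS project texts have already been downloaded and tokenized:

```python
# Minimal LDA sketch with gensim; the toy corpus stands in for tokenized
# NTIS project fields (research goals, content, expected effects, keywords).
from gensim import corpora
from gensim.models import LdaModel

docs = [
    ["artificial", "intelligence", "image", "recognition"],
    ["deep", "learning", "natural", "language", "processing"],
    ["reinforcement", "learning", "robot", "control"],
]
dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
               passes=10, random_state=1)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```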

Implementation of the Large-scale Data Signature System Using Hash Tree Replication Approach (해시 트리 기반의 대규모 데이터 서명 시스템 구현)

  • Park, Seung Kyu
    • Convergence Security Journal, v.18 no.1, pp.19-31, 2018
  • As ICT technologies advance, an unprecedentedly large amount of digital data is created, transferred, stored, and utilized in every industry. As data grow in scale and the technologies applied to them advance, new services emerging from the use of large-scale data make our lives more convenient and useful. However, cybercrimes such as data forgery and falsification of data generation times are also increasing. To secure data against such crimes, technologies for verifying data integrity and generation time are necessary. Today, public-key-based signature technology is the most commonly used, but its costly system resources and the additional infrastructure needed to manage certificates and keys make it impractical in large-scale data environments. This research introduces a new signature technology for large-scale data, based on hash functions and a Merkle tree, that consumes far fewer system resources. An improved method for processing distributed hash trees is also suggested to mitigate disruptions caused by server failures. A prototype system was implemented and its performance evaluated. The results show that the technology can be used effectively in a variety of areas that produce large-scale data, such as cloud computing, IoT, big data, and fin-tech.
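
The core construction the abstract names, hashing a batch of data into a Merkle tree so a single root authenticates every record, can be sketched briefly (the hash choice and the padding rule for odd levels are assumptions, not necessarily the paper's):

```python
# Minimal Merkle-tree sketch using SHA-256; duplicating the last node on
# odd-sized levels is an assumed padding rule.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"record-1", b"record-2", b"record-3"])
print(root.hex())  # one root authenticates the whole batch
```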

Real-time Watermarking Algorithm using Multiresolution Statistics for DWT Image Compressor (DWT기반 영상 압축기의 다해상도의 통계적 특성을 이용한 실시간 워터마킹 알고리즘)

  • Choi, Soon-Young;Seo, Young-Ho;Yoo, Ji-Sang;Kim, Dae-Kyung;Kim, Dong-Wook
    • Journal of the Korea Institute of Information Security & Cryptology, v.13 no.6, pp.33-43, 2003
  • In this paper, we propose a real-time watermarking algorithm designed to be combined with a DWT (Discrete Wavelet Transform)-based image compressor. To reduce the amount of computation in selecting watermarking positions, the proposed algorithm uses a pre-established look-up table of critical values, built statistically by computing the correlation according to the energy values of the corresponding wavelet coefficients. That is, the watermark is embedded into coefficients whose values are greater than the critical value in the look-up table, which is indexed by the energy values of the corresponding level-1 subband coefficients. The proposed algorithm can therefore operate in real time, because the watermarking process runs in parallel with the compression process without affecting image compression. It also mitigates the loss of the watermark caused by quantization and Huffman coding during compression, and reduces the degradation of compression efficiency caused by watermark insertion. Visually recognizable patterns such as binary images were used as watermarks. The experimental results showed that the proposed algorithm satisfies robustness and imperceptibility, the major requirements of watermarking.
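
A minimal sketch of the embedding rule the abstract describes: look up a critical value by subband energy and embed watermark bits only into coefficients above it. The energy buckets, LUT values, and embedding strength below are assumptions for illustration:

```python
# Hypothetical sketch: embed watermark bits into wavelet coefficients whose
# magnitude exceeds a critical value looked up by subband energy.
import numpy as np

ENERGY_LUT = [(1e3, 4.0), (1e5, 8.0), (float("inf"), 16.0)]  # assumed values

def critical_value(subband):
    energy = float(np.sum(subband ** 2))
    for bound, crit in ENERGY_LUT:
        if energy < bound:
            return crit

def embed(subband, bits, strength=1.0):
    out = subband.copy()
    crit = critical_value(subband)
    idx = np.flatnonzero(np.abs(out) > crit)[: len(bits)]  # positions above LUT threshold
    for i, bit in zip(idx, bits):
        out.flat[i] += strength if bit else -strength
    return out

marked = embed(np.random.randn(8, 8) * 10, bits=[1, 0, 1, 1])
```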

Survival network based Android Authorship Attribution considering overlapping tolerance (중복 허용 범위를 고려한 서바이벌 네트워크 기반 안드로이드 저자 식별)

  • Hwang, Cheol-hun;Shin, Gun-Yoon;Kim, Dong-Wook;Han, Myung-Mook
    • Journal of Internet Computing and Services, v.21 no.6, pp.13-21, 2020
  • Android authorship attribution can be interpreted narrowly as a method for revealing the source of an app, and more broadly as a study for gaining insight to identify similar works through known works. A problem in Android authorship attribution is that, although code is central on the Android system, meaningless boilerplate code makes it difficult to find features characteristic of an author; as a result, legitimate code or behavior has also been incorrectly labeled as malicious. To solve this, we introduce the concept of a survival network, which removes features found across many Android apps and keeps only the unique features defined by each author. We conducted an experiment comparing the proposed framework with a previous study. In experiments on apps attributed to 440 authors, we obtained a classification accuracy of up to 92.10%, a difference of up to 3.47% from the previous study. Although it used a small amount of training data, the framework's use of unique, non-overlapping features per author distinguishes it from previous studies. In addition, in comparative experiments on feature definition methods, the same accuracy was achieved with a small number of features, showing that continuously overlapping, meaningless features can be managed through the survival network concept.
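
The surviving-feature idea, dropping features shared across too many authors (within an overlapping tolerance) and keeping the rest as author signatures, reduces to simple set arithmetic. A minimal sketch, with the tolerance parameter and all names hypothetical:

```python
# Hypothetical sketch of "surviving" features: drop any feature shared by
# more than `tolerance` authors, keep the rest as author-specific signatures.
from collections import Counter

def surviving_features(author_features, tolerance=1):
    """author_features: dict author -> set of features extracted from their apps."""
    counts = Counter(f for feats in author_features.values() for f in feats)
    return {
        author: {f for f in feats if counts[f] <= tolerance}
        for author, feats in author_features.items()
    }

apps = {"a1": {"obfuscator_x", "logger_y"}, "a2": {"logger_y", "packer_z"}}
print(surviving_features(apps))  # logger_y is shared, so it does not survive
```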

Research on illegal copyright distributor tracking and profiling technology (불법저작물 유포자 행위분석 프로파일링 기술 연구)

  • Kim, Jin-gang;Hwang, Chan-woong;Lee, Tae-jin
    • Journal of Internet Computing and Services, v.22 no.3, pp.75-83, 2021
  • With the development of the IT industry and the increase in cultural activities, demand for creative works is growing, and works can be used easily and conveniently in online environments. Accordingly, copyright infringement is rampant because works are easy to copy and distribute. Some special types of Online Service Providers (OSPs) use filtering-based technology to protect copyrights, but these filters are easily bypassed and cannot block all illegal works, making copyright protection increasingly difficult. Recently, most distributors of illegal works have been a small minority who profit by distributing them through many OSPs and multiple IDs. In this paper, we propose a profiling technique for heavy uploaders, the primary analysis targets in illegal-work distribution. We create features containing information on the illegal works overall and identify the major heavy uploaders. Among these, clustering is used to identify heavy uploaders presumed to be the same person. In addition, high-priority heavy uploaders can be analyzed through distributor tracking and behavior analysis. In the future, we expect copyright damage to be minimized by identifying and blocking heavy uploaders who distribute large amounts of illegal works.
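
For the step that groups IDs presumed to belong to one person, density-based clustering over per-ID behavior vectors is one plausible reading of "clustering technology"; a minimal sketch with scikit-learn, where the feature columns and parameters are illustrative assumptions:

```python
# Hypothetical sketch: cluster uploader-ID behavior vectors so IDs presumed
# to be the same person land in the same cluster. Features are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# one row per uploader ID; columns: uploads/day, typical active hour, title-style score
X = np.array([[40, 2.0, 0.9], [38, 2.1, 0.88], [3, 14.0, 0.1], [41, 1.9, 0.91]])
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(StandardScaler().fit_transform(X))
print(labels)  # IDs sharing a cluster label are candidates for the same person
```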

Development of a modified model for predicting cabbage yield based on soil properties using GIS (GIS를 이용한 토양정보 기반의 배추 생산량 예측 수정모델 개발)

  • Choi, Yeon Oh;Lee, Jaehyeon;Sim, Jae Hoo;Lee, Seung Woo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.40 no.5, pp.449-456, 2022
  • This study proposes a deep learning algorithm that predicts crop yield using GIS (Geographic Information System) to extract soil properties from SoilGrids and soil suitability class maps. The proposed model modifies the structure of a published CNN-RNN (Convolutional Neural Network-Recurrent Neural Network) crop yield prediction model to suit the domestic crop environment. The existing model has two characteristics: it replaces the original yield with the average yield of the year, and it trains on data from the year being predicted. The new model uses the original field values to ensure accuracy, and its network structure is improved so that it trains only on data from years prior to the one being predicted. The proposed model predicted the yield per unit area of autumn kimchi cabbage by region, based on weather, soil, soil suitability class, and yield data from 1980 to 2020. Predicting each of the four years from 2018 to 2021, the error on the test data set was about 10%, enabling accurate yield prediction, especially in regions that account for a large proportion of total yield. In addition, for both the proposed and existing models the error gradually decreases as the number of years of training data increases, showing that generalization improves as training data accumulate.
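
The key change, training only on years strictly before the prediction target, amounts to a temporal train/test split. A minimal sketch of that split, with all names hypothetical:

```python
# Hypothetical sketch of the temporal split the modified model enforces:
# train only on records strictly before the target year.
def temporal_split(records, target_year):
    """records: list of dicts with a 'year' key; returns (train, test)."""
    train = [r for r in records if r["year"] < target_year]
    test = [r for r in records if r["year"] == target_year]
    return train, test

data = [{"year": y, "yield": 100 + y % 7} for y in range(1980, 2022)]
train, test = temporal_split(data, target_year=2021)
print(len(train), len(test))  # 41 training years, 1 test year
```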

DNN Model for Calculation of UV Index at The Location of User Using Solar Object Information and Sunlight Characteristics (태양객체 정보 및 태양광 특성을 이용하여 사용자 위치의 자외선 지수를 산출하는 DNN 모델)

  • Ga, Deog-hyun;Oh, Seung-Taek;Lim, Jae-Hyun
    • Journal of Internet Computing and Services, v.23 no.2, pp.29-35, 2022
  • UV rays have beneficial or harmful effects on the human body depending on the degree of exposure, so accurate UV information is required for each individual to manage exposure properly. In Korea, UV information is provided by the Korea Meteorological Administration as one component of daily weather information, but the regional ultraviolet index (UVI) it reports is not accurate at a specific user's location. Some users operate measuring instruments to obtain an accurate UVI, but this is costly and inconvenient. Studies estimating UVI from environmental factors such as solar radiation and cloud amount have been introduced, but they also could not provide a per-individual service. Therefore, this paper proposes a deep learning model that calculates UVI from solar object information and sunlight characteristics to provide an accurate UVI at an individual's location. After selecting factors considered highly correlated with UVI, such as the location, size, and illuminance of the sun, obtained by analyzing sky images and solar characteristics data, we constructed a data set for a DNN model. The DNN model calculates UVI from the solar object information extracted through Mask R-CNN together with the sunlight characteristics. In a performance evaluation against standard equipment on days with UVI above and below 8, reflecting the domestic UVI recommendation standards, the model calculated UVI accurately within an MAE of 0.26.
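
The abstract specifies a DNN regressor over solar-object features but not its architecture; a minimal Keras-style sketch under that assumption (layer sizes, feature order, and the synthetic data are all illustrative):

```python
# Hypothetical DNN sketch: regress UVI from solar-object features
# (e.g. sun x/y position, apparent size, illuminance). Architecture assumed.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),          # [sun_x, sun_y, size, illuminance]
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                   # predicted UVI
])
model.compile(optimizer="adam", loss="mae")     # MAE matches the paper's metric

X = np.random.rand(256, 4).astype("float32")    # stand-in for extracted features
y = (10 * X[:, 3] + np.random.rand(256) * 0.5).astype("float32")  # synthetic UVI
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```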

Cyber attack group classification based on MITRE ATT&CK model (MITRE ATT&CK 모델을 이용한 사이버 공격 그룹 분류)

  • Choi, Chang-hee;Shin, Chan-ho;Shin, Sung-uk
    • Journal of Internet Computing and Services, v.23 no.6, pp.1-13, 2022
  • As the information and communication environment develops, the IT environment of military facilities is also developing remarkably. Cyber threats are increasing in proportion, and in particular APT attacks, which are difficult to prevent with existing signature-based cyber defense systems, frequently target military and national infrastructure. Identifying the attack group is important for an appropriate response, but it is very difficult because cyber attacks are conducted covertly using methods such as anti-forensics. In the past, after an attack was detected, a security expert had to perform lengthy, high-level analysis of the large amount of collected evidence to get a clue about the attack group. To solve this problem, we propose an automated technique that can classify the attack group within a short time after detection. Compared to general cyber attacks, APT attacks are few in number, little data about them is known, and they are designed to bypass signature-based cyber defenses. As the attack model we used MITRE ATT&CK®, which models many aspects of cyber attacks. We designed an impact score that considers the versatility of attack techniques and, based on it, proposed a group similarity score. Experimental results show that the proposed method classified attack groups with 72.62% Top-5 accuracy.
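
The impact and similarity scores are described only at a high level; one consistent reading is to weight each ATT&CK technique by its rarity across known groups (a widely used, versatile technique carries less signal) and rank groups by weighted overlap with the observed techniques. A sketch under that assumption, with the score formulas and group data purely illustrative:

```python
# Hypothetical sketch: weight each ATT&CK technique by inverse frequency
# across known groups ("impact"), then rank groups by weighted overlap with
# the techniques observed in an incident. Top-5 mirrors the paper's metric.
import math

GROUPS = {
    "G0001": {"T1059", "T1566", "T1027"},
    "G0002": {"T1566", "T1105"},
    "G0003": {"T1059", "T1105", "T1003"},
}

def impact(technique):
    users = sum(technique in t for t in GROUPS.values())
    return math.log(len(GROUPS) / users) + 1.0  # rarer -> higher impact (assumed form)

def similarity(observed, group_techs):
    shared = observed & group_techs
    return sum(impact(t) for t in shared) / sum(impact(t) for t in group_techs)

observed = {"T1059", "T1027"}
top5 = sorted(GROUPS, key=lambda g: similarity(observed, GROUPS[g]), reverse=True)[:5]
print(top5)  # candidate attack groups, best match first
```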