• Title/Summary/Keyword: Big data indexing


On the performance of the hash based indexes for storing the position information of moving objects (이동체의 위치 정보를 저장하기 위한 해쉬 기반 색인의 성능 분석)

  • Jun, Bong-Gi
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.6 s.44
    • /
    • pp.9-17
    • /
    • 2006
  • Moving object database systems manage sets of objects whose locations and directions change continuously. Traditional spatial indexing schemes are not suitable for moving objects because they were designed to manage static spatial data. Since the location of a moving object changes continuously, the cost of dynamically reconstructing an existing spatial index structure becomes excessive. In this paper, we analyze the insertion/deletion costs of processing object movement. The results of our extensive experiments show that the Dynamic Hashing Index typically outperforms the original R-tree and the fixed grid by a large margin.
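The fixed-grid baseline that hash-based position indexes are compared against can be illustrated with a small sketch: objects are bucketed by the cell containing their current position, and a movement only touches the index when a cell boundary is crossed. The class and method names below are illustrative, not from the paper.

```python
from collections import defaultdict

class FixedGridIndex:
    """Hash-based position index: each object is bucketed by the
    grid cell containing its current (x, y) position."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(set)   # (cx, cy) -> set of object ids
        self.positions = {}             # object id -> (x, y)

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, oid, x, y):
        self.cells[self._cell(x, y)].add(oid)
        self.positions[oid] = (x, y)

    def delete(self, oid):
        x, y = self.positions.pop(oid)
        self.cells[self._cell(x, y)].discard(oid)

    def move(self, oid, x, y):
        # Only touch the cell buckets when the object crosses a
        # cell boundary; otherwise just update its position.
        old = self._cell(*self.positions[oid])
        new = self._cell(x, y)
        if old != new:
            self.cells[old].discard(oid)
            self.cells[new].add(oid)
        self.positions[oid] = (x, y)

grid = FixedGridIndex(cell_size=10.0)
grid.insert("car1", 3.0, 4.0)
grid.move("car1", 12.0, 4.0)   # crosses into the next cell
```

The insertion/deletion cost being measured is exactly the bucket maintenance in `move`: frequent boundary crossings make it expensive, which is where the hashing schemes differ.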


Provenance and Validation from the Humanities to Automatic Acquisition of Semantic Knowledge and Machine Reading for News and Historical Sources Indexing/Summary

  • NANETTI, Andrea;LIN, Chin-Yew;CHEONG, Siew Ann
    • Asian review of World Histories
    • /
    • v.4 no.1
    • /
    • pp.125-132
    • /
    • 2016
  • This paper, as a conclusion to this special issue, presents the future work being carried out at NTU Singapore in collaboration with Microsoft Research and Microsoft Azure for Research. For our research team, the real frontier research in world histories starts when we want to use computers to structure historical information, model historical narratives, simulate theoretical large-scale hypotheses, and incentivize world historians to use virtual assistants and/or engage them in teamwork using social media and/or seduce them with immersive spaces to provide new learning and sharing environments in which new things can emerge and happen: "You do not know which will be the next idea. Just repeating the same things is not enough" (Carlo Rubbia, 1984 Nobel Prize in Physics, at Nanyang Technological University on January 19, 2016).

Linked Data Indexing System for Big Data Processing on the Cloud System (빅데이터 활용을 위한 클라우드 기반의 링크드 데이터 인덱싱 시스템)

  • Lee, Mina;Jung, Jinuk;Kim, Eung-hee;Kim, Hong-gee
    • Annual Conference of KIPS
    • /
    • 2013.11a
    • /
    • pp.1596-1598
    • /
    • 2013
  • Semantic Web technology, which emerged in the early 2000s, has recently been attracting renewed attention. This can be seen by comparing the amount of semantic data built in the early days with the amount being built today. However, existing Semantic Web technologies have difficulty processing large-scale data, so techniques for handling it have become an important issue. In this paper, as an alternative to existing RDF repositories, we use a combination of multiple databases. To process RDF data efficiently, we build the system on a NoSQL DB together with a memory-based relational DB. We also propose a system in which users can obtain the desired results with their existing SPARQL queries as-is, without any additional knowledge of the underlying storage.
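A common way to lay RDF triples over a key-value or NoSQL store, so that any SPARQL triple pattern becomes a direct index lookup, is to keep several permutations of each triple. The sketch below uses plain dictionaries to stand in for the NoSQL layer; it is a minimal illustration of the general technique, not the authors' system.

```python
from collections import defaultdict

class TripleStore:
    """Toy RDF store keeping three permutations of each triple so that
    any SPARQL-style triple pattern maps to one index lookup."""

    def __init__(self):
        self.spo = defaultdict(lambda: defaultdict(set))  # subject -> pred -> objects
        self.pos = defaultdict(lambda: defaultdict(set))  # pred -> object -> subjects
        self.osp = defaultdict(lambda: defaultdict(set))  # object -> subject -> preds

    def add(self, s, p, o):
        self.spo[s][p].add(o)
        self.pos[p][o].add(s)
        self.osp[o][s].add(p)

    def objects(self, s, p):
        """Answers the pattern (s, p, ?o)."""
        return self.spo[s][p]

    def subjects(self, p, o):
        """Answers the pattern (?s, p, o)."""
        return self.pos[p][o]

store = TripleStore()
store.add("ex:Seoul", "ex:locatedIn", "ex:Korea")
store.add("ex:Busan", "ex:locatedIn", "ex:Korea")
```

In a real deployment each permutation would be a column family or key range in the NoSQL store, with hot ranges cached in the in-memory relational DB.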

RDBMS Based Efficient Method for Shortest Path Searching Over Large Graphs Using K-degree Index Table (대용량 그래프에서 k-차수 인덱스 테이블을 이용한 RDBMS 기반의 효율적인 최단 경로 탐색 기법)

  • Hong, Jihye;Han, Yongkoo;Lee, Young-Koo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.5
    • /
    • pp.179-186
    • /
    • 2014
  • Current networks such as social networks, web page links, and traffic networks are big data with large numbers of nodes and edges. Many applications, such as social network services and navigation systems, use these networks. Since big networks do not fit into memory, existing in-memory analysis techniques cannot provide high performance. The Frontier-Expansion-Merge (FEM) framework performs graph search operations using three corresponding operators in the relational database (RDB) context. FEM exploits an index table that stores pre-computed partial paths for efficient shortest path discovery. However, the index table of FEM has a low hit ratio because the indexed nodes are chosen by their distances rather than by their likelihood of lying on a shortest path. In this paper, we propose a method that constructs the index table from high-degree nodes, which have a high hit ratio, for efficient shortest path discovery. We experimentally verify that our index technique supports shortest path discovery efficiently on real-world datasets.
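The intuition that high-degree nodes are more likely to lie on shortest paths can be sketched as a tiny landmark index: pre-compute distances from the top-k degree nodes, then bound a query distance by the best route through an indexed hub. This is an illustrative simplification (distances instead of stored partial paths, in-memory instead of RDB tables), not the paper's implementation.

```python
from collections import deque

def bfs_dist(graph, src):
    """Unweighted single-source shortest-path distances."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_index(graph, k):
    """Index the k highest-degree nodes: these hubs are the most
    likely to appear on a shortest path between random endpoints."""
    hubs = sorted(graph, key=lambda n: len(graph[n]), reverse=True)[:k]
    return {h: bfs_dist(graph, h) for h in hubs}

def estimate(index, u, v):
    """Upper bound on d(u, v): best route through an indexed hub."""
    return min(d[u] + d[v] for d in index.values() if u in d and v in d)

graph = {
    "a": ["h"], "b": ["h"], "c": ["h"], "d": ["h", "e"],
    "e": ["d"], "h": ["a", "b", "c", "d"],
}
idx = build_index(graph, k=1)   # indexes the hub node "h"
```

When the true shortest path passes through a hub (an index "hit"), the bound is exact; picking hubs by degree rather than by spatial distance raises that hit ratio.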

Efficient k-Nearest Neighbor Query Processing Method for a Large Location Data (대용량 위치 데이터에서 효율적인 k-최근접 질의 처리 기법)

  • Choi, Dojin;Lim, Jongtae;Yoo, Seunghun;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.8
    • /
    • pp.619-630
    • /
    • 2017
  • With the growing popularity of smart devices, various location-based services have been provided to users. Recently, location-based social applications that combine social services and location-based services have emerged. In location-based social network services, demand has increased for k-nearest neighbor (k-NN) queries, which find the k locations closest to a user's location. In this paper, we propose an approximate k-NN query processing method that achieves fast response times in environments with large numbers of users. The proposed method performs efficient stream processing using big data distributed processing technologies. We also propose a modified grid index method for indexing a large amount of location data. The proposed query processing method first retrieves the related cells by considering the user's movement, and from them produces an approximate set of k results. To show the superiority of the proposed method, we conduct various performance evaluations against the existing method.
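The cell-retrieval step of a grid-based approximate k-NN can be sketched as follows: only the cells in a small neighbourhood of the query point are examined, trading exactness for a bounded amount of work. The class and parameter names are illustrative, not from the paper.

```python
import heapq
import math
from collections import defaultdict

class GridKNN:
    """Approximate k-NN over a cell grid: look only at cells in a
    small neighbourhood of the query point instead of all points."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)   # (cx, cy) -> [(x, y, id)]

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, oid, x, y):
        self.cells[self._cell(x, y)].append((x, y, oid))

    def knn(self, x, y, k, ring=1):
        """Collect candidates from the (2*ring+1)^2 cells around the
        query, then return the k closest; approximate because points
        outside this neighbourhood are never examined."""
        cx, cy = self._cell(x, y)
        cand = []
        for dx in range(-ring, ring + 1):
            for dy in range(-ring, ring + 1):
                cand.extend(self.cells[(cx + dx, cy + dy)])
        return heapq.nsmallest(
            k, cand, key=lambda p: math.hypot(p[0] - x, p[1] - y))

g = GridKNN(cell_size=5.0)
for i, (px, py) in enumerate([(1, 1), (2, 2), (8, 8), (40, 40)]):
    g.insert(f"u{i}", px, py)
nearest = g.knn(0.0, 0.0, k=2)
```

A streaming version would keep the same cell layout but partition cells across workers, which is where the distributed processing mentioned in the abstract comes in.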

Digital Competencies Required for Information Science Specialists at Saudi Universities

  • Yamani, Hanaa;AlHarthi, Ahmed;Elsigini, Waleed
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.2
    • /
    • pp.212-220
    • /
    • 2021
  • The objectives of this research were to identify the digital competencies required for information science specialists at Saudi universities and to examine whether there were conspicuous differences in the standpoint of these specialists, according to years of work experience, with regard to the importance of these competencies. A descriptive analytical method was used to accomplish these objectives while extracting the required digital competency list and ascertaining its importance. The research sample comprised 24 experts in the field of information science from several universities in the Kingdom of Saudi Arabia. The participants were asked to complete a questionnaire prepared to acquire the pertinent data between January 5, 2021 and January 20, 2021. The results reveal that the digital competencies required for information science specialists at Saudi universities encompass general features such as the ability to use computers, the Internet, Web 2.0, Web 3.0, and smartphone applications; digital learning resource development; processing and sharing (big) data via the Internet; system analysis; dealing with multiple electronic indexing applications and learning management systems and their features; using electronic bibliographic control tools and artificial intelligence tools; cybersecurity system maintenance; the ability to comprehend and use different programming languages; simulation and augmented reality applications; and knowledge and skills for 3D printing. Furthermore, no statistically significant differences were observed between the mean ranks of scores of specialists with less than 10 years of practical experience and those with 10 years or more with regard to the importance conferred on digital competencies.

A Fast and Scalable Image Retrieval Algorithm by Leveraging Distributed Image Feature Extraction on MapReduce (MapReduce 기반 분산 이미지 특징점 추출을 활용한 빠르고 확장성 있는 이미지 검색 알고리즘)

  • Song, Hwan-Jun;Lee, Jin-Woo;Lee, Jae-Gil
    • Journal of KIISE
    • /
    • v.42 no.12
    • /
    • pp.1474-1479
    • /
    • 2015
  • With mobile devices showing marked improvement in performance in the age of the Internet of Things (IoT), there is demand for rapid processing of extensive amounts of multimedia big data. However, because research on image search has focused mainly on maintaining accuracy despite environmental changes, fast query processing for high-resolution multimedia data remains slow and inefficient. Hence, we suggest a new distributed image search algorithm that ensures both high accuracy and rapid response by extracting image features in a distributed manner with MapReduce, and that solves the problem of memory scalability with BIRCH-based indexing. In addition, we conducted experiments on the accuracy, processing time, and scalability of this algorithm to confirm its excellent performance.
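The map/reduce shape of distributed feature extraction can be sketched in a few lines: each mapper extracts features from one image independently, the shuffle groups them by a coarse quantisation key, and the reducer builds an index entry per group (playing the role of a cluster in the BIRCH tree). The toy "feature" below is a mean intensity, a loud stand-in for real keypoint descriptors; none of the names come from the paper.

```python
from collections import defaultdict

def mapper(image_id, pixels):
    """Map phase: extract a feature from one image independently and
    emit (coarse_bucket, (image_id, feature)) pairs."""
    feature = sum(pixels) / len(pixels)   # toy stand-in for a descriptor
    bucket = int(feature // 32)           # coarse quantisation key
    yield bucket, (image_id, feature)

def reducer(bucket, values):
    """Reduce phase: group features sharing a bucket into one index
    entry; a real system would insert them into a BIRCH subtree."""
    return bucket, sorted(values)

images = {
    "img0": [10, 20, 30],
    "img1": [200, 220, 210],
    "img2": [15, 25, 35],
}

# Simulate the shuffle step between map and reduce.
shuffled = defaultdict(list)
for iid, px in images.items():
    for key, val in mapper(iid, px):
        shuffled[key].append(val)

index = dict(reducer(b, v) for b, v in shuffled.items())
```

Because mappers are independent, the extraction step scales with the number of workers; the memory-scalability concern lives entirely on the reduce/index side.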

The Performance Evaluation System for the Modern Pentathlon based on the Concept of Performance Analysis of Sport

  • Choi, Hyongjun;Han, Doryung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.9
    • /
    • pp.117-123
    • /
    • 2020
  • This study intended to develop a performance evaluation system for the modern pentathlon, an Olympic sporting event. The performance evaluation index system was developed with Microsoft Excel 2016 and Visual Basic for Applications so that it is understandable to practitioners in sport. Consequently, the performance evaluation index can be developed within the concept of performance analysis of sport, such as notational analysis, and the performance indicators for the index were selected according to the skills that produce successful outcomes. Finally, simulation with big data gathered by the developed system is suggested, and systematic reviews of successful outcomes in other sporting events would be necessary.

Stock-Index Invest Model Using News Big Data Opinion Mining (뉴스와 주가 : 빅데이터 감성분석을 통한 지능형 투자의사결정모형)

  • Kim, Yoo-Sin;Kim, Nam-Gyu;Jeong, Seung-Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.143-156
    • /
    • 2012
  • People easily believe that news and stock indices are closely related. They think that securing news before anyone else can help them forecast stock prices and enjoy great profit, or perhaps capture an investment opportunity. However, it is no easy feat to determine to what extent the two are related, to make investment decisions based on news, or to find out whether such investment information is valid. If the significance of news and its impact on the stock market can be analyzed, it becomes possible to extract information that assists investment decisions. The reality, however, is that the world is inundated with a massive wave of news in real time, and news is not patterned text. This study suggests a stock-index investment model based on "News Big Data" opinion mining that systematically collects, categorizes, and analyzes news to create investment information. To verify the validity of the model, the relationship between the results of news opinion mining and the stock index was empirically analyzed using statistics. The mining steps that convert news into information for investment decision making are as follows. First, news supplied in real time by a news provider is indexed: not only the contents of each article but also information such as the medium, time, and news type are collected, classified, and reworked into variables from which investment decisions can be inferred. Next, the text of each article is separated into morphemes to derive words whose polarity can be judged, and each word is tagged with positive/negative polarity by comparison with a sentiment dictionary. Third, the positive/negative polarity of each article is judged using the indexed classification information and a scoring rule, and final investment decision information is derived according to daily scoring criteria.
For this study, the KOSPI index and its fluctuations were collected for the 63 days the stock market was open on the Korea Exchange during the three months from July to September 2011, and news data were collected by parsing 766 articles from economic news medium M carried in the stock information > news > main news section of the portal site Naver.com. Over those three months the index rose on 33 days and fell on 30 days, and the news comprised 197 articles published before the market opened, 385 during the session, and 184 after the close. Mining the collected news and comparing it with stock prices showed that the positive/negative opinion of news content had a significant relation with the stock price, and that changes in the index were better explained when news opinion was derived as a positive/negative ratio rather than as a simplified positive-or-negative judgment. To check whether news affected, or at least preceded, stock price fluctuations, changes in the stock price were compared only with news published before the market opened; this relation was also verified to be statistically significant. In addition, because the news contained various types of information, such as social, economic, and overseas news, corporate earnings, industry conditions, and market outlook, the influence on the stock market was expected to differ by news type. Comparing each type of news with stock price fluctuations showed that market conditions, outlook, and overseas news were the most useful in explaining the fluctuations.
On the contrary, news about individual companies was not statistically significant, but its opinion mining values showed a tendency opposite to the stock price; a plausible reason is promotional or planned news issued to keep stock prices from falling. Finally, multiple regression analysis and logistic regression analysis were carried out to derive an investment decision function from the relation between the positive/negative opinion of news and the stock price. A regression equation using the market conditions, outlook, and overseas news variables from before the market opened was statistically significant, and the classification accuracy of the logistic regression was 70.0% for stock price rises, 78.8% for falls, and 74.6% on average. This study first analyzed the relation between news and stock prices by quantifying the sentiment of unstructured news content using opinion mining, a big data analysis technique, and then proposed and verified a smart investment decision-making model that systematically carries out opinion mining and derives and supports investment information. This shows that news can be used as a variable to predict the stock index for investment, and the model is expected to serve as a real investment support system if it is implemented and verified in the future.
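The dictionary-tagging and daily-scoring steps described in the abstract can be sketched as follows. The word lists, thresholds, and function names are invented for illustration; the paper's actual sentiment dictionary, morpheme analysis, and scoring criteria are not reproduced here.

```python
# Hypothetical sentiment dictionary (stand-in for the paper's).
POSITIVE = {"surge", "gain", "record", "growth"}
NEGATIVE = {"loss", "slump", "fall", "risk"}

def score_article(words):
    """Tag each morpheme against the sentiment dictionary and return
    the positive ratio pos/(pos+neg); 0.5 means neutral."""
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.5 if pos + neg == 0 else pos / (pos + neg)

def daily_signal(articles, buy_above=0.6, sell_below=0.4):
    """Aggregate pre-market article scores into a BUY/SELL/HOLD call
    according to (illustrative) daily scoring criteria."""
    avg = sum(score_article(a) for a in articles) / len(articles)
    if avg >= buy_above:
        return "BUY"
    if avg <= sell_below:
        return "SELL"
    return "HOLD"

# Two pre-market articles, already split into morphemes.
day = [["exports", "surge", "record", "growth"],
       ["minor", "risk", "gain"]]
signal = daily_signal(day)
```

Note that the ratio-based score is what the study found more explanatory than a binary positive/negative judgment per article.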

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies
    • /
    • v.19 no.3
    • /
    • pp.107-123
    • /
    • 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of documents written in natural language. The conventional models, which include the vector space model, Boolean model, statistical model, and tensor space model, represent documents as bags of words: they express documents only with term literals for indexing and frequency-based weights for those terms, ignoring the semantic, sequential-order, and structural information of terms. Most text mining techniques have been developed assuming that documents are represented in such 'bag-of-words' models. Confronting the big data era, however, a new paradigm of text representation is required that can analyse huge amounts of textual documents more precisely. Our text model regards the 'concept' as an independent space, equated with the 'term' and 'document' spaces used in the vector space model, and expresses the relatedness among the three spaces. To develop the concept space, we use Wikipedia data, in which each article defines a single concept. Consequently, a document collection is represented as a 3-order tensor with semantic information, and the proposed model is therefore called the text cuboid model in our paper. Through experiments using the popular 20NewsGroup document corpus, we prove the superiority of the proposed text model in terms of document clustering and concept clustering.
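The term x document x concept structure can be sketched as a sparse 3-order tensor. The weighting scheme below (a simple count whenever a term appears in a concept's vocabulary) is a deliberately simplified stand-in for the paper's construction, and the concept vocabularies abbreviate what would be Wikipedia article text.

```python
from collections import defaultdict

class TextCuboid:
    """Toy term x document x concept tensor stored sparsely: each
    nonzero entry weights a term of a document under a concept."""

    def __init__(self, concept_terms):
        # concept_terms: concept -> set of terms defining it
        # (stand-in for the terms of a Wikipedia concept article).
        self.concept_terms = concept_terms
        self.tensor = defaultdict(float)   # (term, doc, concept) -> weight

    def add_document(self, doc_id, terms):
        for t in terms:
            for c, vocab in self.concept_terms.items():
                if t in vocab:
                    self.tensor[(t, doc_id, c)] += 1.0

    def concept_vector(self, doc_id):
        """Project one document onto the concept space by summing
        its tensor slice over the term mode."""
        vec = defaultdict(float)
        for (t, d, c), w in self.tensor.items():
            if d == doc_id:
                vec[c] += w
        return dict(vec)

cuboid = TextCuboid({
    "Finance": {"stock", "index", "market"},
    "Computing": {"index", "hash", "query"},
})
cuboid.add_document("d1", ["stock", "index", "query"])
vec = cuboid.concept_vector("d1")
```

The point of the third mode is visible even in this toy: the ambiguous term "index" contributes weight under both concepts, information a flat bag-of-words vector discards.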