• Title/Summary/Keyword: in-memory computing

Design and Implementation of the Extended SLDS for Real-time Location Based Services (실시간 위치 기반 서비스를 위한 확장 SLDS 설계 및 구현)

  • Lee, Seung-Won;Kang, Hong-Koo;Hong, Dong-Suk;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society / v.7 no.2 s.14 / pp.47-56 / 2005
  • Recently, with the rapid development of mobile computing and wireless positioning technologies and the spread of the wireless internet, LBS (Location Based Services), which utilize the location information of moving objects, are being provided in many fields. To provide LBS efficiently, a location data server that periodically stores the location data of moving objects is required. Formerly, GIS servers were used to store the location data of moving objects, but they are not suitable for this purpose because they were designed to store static data. Therefore, in this paper, we design and implement an extended SLDS (Short-term Location Data Subsystem) for real-time Location Based Services. The extended SLDS extends the SLDS, a subsystem of the GALIS (Gracefully Aging Location Information System) architecture, which was proposed as a cluster-based distributed computing architecture for managing the location data of moving objects. The extended SLDS guarantees real-time service capabilities using the TMO (Time-triggered Message-triggered Object) programming scheme and efficiently manages large volumes of location data by distributing moving-object data over multiple nodes. It also incurs little search and update overhead because it manages location data in main memory. In addition, our performance evaluation shows that the extended SLDS stores location data and distributes load more efficiently than the original SLDS.

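The abstract's core design can be illustrated with a minimal Python sketch: moving-object updates are hash-partitioned across in-memory nodes so that searches and updates never touch disk. All class and method names below are illustrative assumptions, not the actual SLDS/TMO code.

```python
# Minimal sketch (not the actual SLDS/TMO code): hash-partitioning
# moving-object location updates across in-memory "nodes" so that
# lookups and updates avoid disk I/O entirely.
import time

class LocationNode:
    """One cluster node holding a shard of locations in main memory."""
    def __init__(self):
        self.locations = {}  # object_id -> (x, y, timestamp)

    def update(self, object_id, x, y):
        self.locations[object_id] = (x, y, time.time())

    def query(self, object_id):
        return self.locations.get(object_id)

class ShortTermLocationStore:
    """Routes each moving object to a node by hashing its id."""
    def __init__(self, num_nodes=4):
        self.nodes = [LocationNode() for _ in range(num_nodes)]

    def _node_for(self, object_id):
        return self.nodes[hash(object_id) % len(self.nodes)]

    def update(self, object_id, x, y):
        self._node_for(object_id).update(object_id, x, y)

    def query(self, object_id):
        return self._node_for(object_id).query(object_id)

store = ShortTermLocationStore()
store.update("taxi-42", 127.03, 37.49)
print(store.query("taxi-42"))
```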

Minimalist Cryptography for Low-Cost RFID Tags

  • Juels, Ari
    • Proceedings of the Korea Information Convergence Society Conference / 2008.06a / pp.47-52 / 2008
  • A radio-frequency identification (RFID) tag is a small, inexpensive microchip that emits an identifier in response to a query from a nearby reader. The price of these tags promises to drop to the range of $0.05 per unit in the next several years, offering a viable and powerful replacement for barcodes. The challenge in providing security for low-cost RFID tags is that they are computationally weak devices, unable to perform even basic symmetric-key cryptographic operations. Security researchers therefore often assume that good privacy protection in RFID tags is unattainable. In this paper, we explore a notion of minimalist cryptography suitable for RFID tags. We consider the type of security obtainable in RFID devices with a small amount of rewritable memory but very limited computing capability. Our aim is to show that standard cryptography is not necessary as a starting point for improving the security of very weak RFID devices. Our contribution is threefold: 1. We propose a new formal security model for authentication and privacy in RFID tags. This model takes into account the natural computational limitations and the likely attack scenarios for RFID tags in real-world settings. It represents a useful divergence from standard cryptographic security modeling, and thus a new view of the practical formalization of minimal security requirements for low-cost RFID-tag security. 2. We describe a protocol that provably achieves the properties of authentication and privacy in RFID tags in our proposed model, and in a good practical sense. Our proposed protocol involves no computationally intensive cryptographic operations and relatively little storage. 3. Of particular practical interest, we describe some reduced-functionality variants of our protocol. We show, for instance, how static pseudonyms may considerably enhance security against eavesdropping in low-cost RFID tags. Our most basic static-pseudonym proposals require virtually no increase in existing RFID tag resources.

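The static-pseudonym idea in contribution 3 can be sketched as follows. This shows only the pseudonym-rotation mechanism; the full protocol adds one-time-pad based mutual authentication and pseudonym refresh, and all names here are illustrative assumptions.

```python
# Sketch of the static-pseudonym idea only: the tag cycles through a
# few stored pseudonyms so an eavesdropper who observes one query
# cannot link the tag across later queries.
import secrets

class Tag:
    def __init__(self, k=4):
        # k short random pseudonyms stored in the tag's rewritable memory
        self.pseudonyms = [secrets.token_hex(4) for _ in range(k)]
        self._next = 0

    def respond(self):
        # Rotate to the next pseudonym on each reader query.
        p = self.pseudonyms[self._next]
        self._next = (self._next + 1) % len(self.pseudonyms)
        return p

class Reader:
    def __init__(self, tags):
        # Back-end database: pseudonym -> real tag identity
        self.directory = {p: tid
                          for tid, t in tags.items()
                          for p in t.pseudonyms}

    def identify(self, pseudonym):
        return self.directory.get(pseudonym)

tags = {"pallet-7": Tag()}
reader = Reader(tags)
print(reader.identify(tags["pallet-7"].respond()))  # -> "pallet-7"
```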

A Study on Architecture of Motion Compensator for H.264/AVC Encoder (H.264/AVC부호화기용 움직임 보상기의 아키텍처 연구)

  • Kim, Won-Sam;Sonh, Seung-Il;Kang, Min-Goo
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.3 / pp.527-533 / 2008
  • Motion compensation is always the principal bottleneck in real-time, high-quality video applications, so fast dedicated hardware is needed to perform it. In many video encoding methods, frames are partitioned into blocks of pixels, and motion compensation predicts the current block by estimating motion from the previous frame. Higher pixel accuracy yields better performance but increases computational complexity. In this paper, we study a motion compensator architecture suitable for an H.264/AVC encoder that supports quarter-pixel accuracy. The designed motion compensator increases throughput using a transpose array and three 6-tap luma filters, and efficiently reduces memory accesses. It is described in VHDL, synthesized with Xilinx ISE, and verified using ModelSim 6.1i. Our motion compensator uses 6-tap filters only and processes a macroblock in 640 clock cycles, making it suitable for applications that require real-time video processing.
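
For reference, H.264 derives half-pel luma samples with the standard 6-tap filter (1, -5, 20, 20, -5, 1)/32 and quarter-pel samples by bilinear averaging. Below is a small software sketch of that interpolation; the paper implements a hardware version in VHDL, so this is only an illustration of the arithmetic.

```python
# Software sketch of H.264 luma interpolation. Half-pel uses the
# standard 6-tap filter (1, -5, 20, 20, -5, 1); quarter-pel averages
# two neighboring samples with rounding.
def clip255(v):
    return max(0, min(255, v))

def half_pel(row, i):
    """Horizontal half-sample between row[i] and row[i+1]."""
    taps = (1, -5, 20, 20, -5, 1)
    acc = sum(t * row[i - 2 + k] for k, t in enumerate(taps))
    return clip255((acc + 16) >> 5)  # add rounding term, divide by 32

def quarter_pel(a, b):
    """Quarter-sample: bilinear average with rounding."""
    return (a + b + 1) >> 1

row = [10, 20, 80, 120, 90, 30, 15, 12]
h = half_pel(row, 3)          # half-pel between row[3] and row[4]
q = quarter_pel(row[3], h)    # quarter-pel between row[3] and h
print(h, q)
```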

Design and Forensic Analysis of a Zero Trust Model for Amazon S3 (Amazon S3 제로 트러스트 모델 설계 및 포렌식 분석)

  • Kyeong-Hyun Cho;Jae-Han Cho;Hyeon-Woo Lee;Jiyeon Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.2 / pp.295-303 / 2023
  • As the cloud computing market grows, a variety of cloud services are now reliably delivered, and administrative agencies and public institutions of South Korea are transferring all their information systems to cloud systems. Since protecting cloud services from misuse and from malicious access by insiders and outsiders over the Internet is challenging, it is essential to develop security solutions in advance in order to operate them safely. In this paper, we propose a zero trust model for cloud storage services that store sensitive data, and we verify its effectiveness by operating a cloud storage service. Memory, web, and network forensics are also performed to track the access and usage of cloud users depending on the adoption of the zero trust model. As the cloud storage service, we use Amazon S3 (Simple Storage Service) and deploy zero trust techniques such as access control lists and key management systems. Furthermore, in order to consider the different types of access to S3, we generate service requests inside and outside AWS (Amazon Web Services) and then analyze the results of the zero trust techniques depending on the location of the service request.
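
Two of the zero trust building blocks the abstract names, access control and key management, can be sketched with boto3. The bucket name, role ARN, and key alias below are placeholder assumptions, not the paper's actual configuration.

```python
# Illustrative boto3 sketch of two controls mentioned in the abstract:
# a deny-by-default bucket policy and SSE-KMS encryption on upload.
# Bucket name, role ARN, and key alias are placeholders.
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptTrustedRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-sensitive-bucket",
                     "arn:aws:s3:::example-sensitive-bucket/*"],
        "Condition": {"StringNotEquals": {
            "aws:PrincipalArn":
                "arn:aws:iam::123456789012:role/TrustedRole"}}
    }]
}
s3.put_bucket_policy(Bucket="example-sensitive-bucket",
                     Policy=json.dumps(policy))

# Each object is encrypted with a customer-managed KMS key, so access
# requires both S3 permissions and kms:Decrypt on the key.
s3.put_object(Bucket="example-sensitive-bucket", Key="report.csv",
              Body=b"sensitive,data\n",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/example-zero-trust-key")
```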

A GPU-enabled Face Detection System in the Hadoop Platform Considering Big Data for Images (이미지 빅데이터를 고려한 하둡 플랫폼 환경에서 GPU 기반의 얼굴 검출 시스템)

  • Bae, Yuseok;Park, Jongyoul
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.20-25 / 2016
  • With the advent of the era of digital big data, the Hadoop platform has become widely used in various fields. However, the Hadoop MapReduce framework suffers from the growth of the name node's main memory usage and of the number of map tasks when processing large numbers of small files. In addition, a method for running C++-based tasks in the MapReduce framework is required in order to exploit GPUs, which support hardware-based data parallelism, within the framework. Therefore, in this paper, we present a face detection system that generates a sequence file for images to handle image big data on the Hadoop platform, and that runs GPU-based face detection tasks in the MapReduce framework using Hadoop Pipes. We demonstrate a performance increase of around 6.8-fold compared to a single CPU process.
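
The sequence-file idea, packing many small images into one container keyed by filename so the name node tracks one file instead of thousands, can be sketched as follows. Real Hadoop SequenceFiles are written via Hadoop's Java API; this sketch only mimics a length-prefixed key/value layout.

```python
# Sketch of the "pack many small images into one container file" idea
# behind Hadoop SequenceFiles. Records are stored as length-prefixed
# (filename, bytes) pairs; this is not the real SequenceFile format.
import os
import struct

def pack_images(image_dir, out_path):
    with open(out_path, "wb") as out:
        for name in sorted(os.listdir(image_dir)):
            with open(os.path.join(image_dir, name), "rb") as f:
                data = f.read()
            key = name.encode("utf-8")
            # length-prefixed key record, then length-prefixed value
            out.write(struct.pack(">I", len(key)) + key)
            out.write(struct.pack(">I", len(data)) + data)

def unpack_images(path):
    with open(path, "rb") as f:
        while header := f.read(4):
            key = f.read(struct.unpack(">I", header)[0]).decode("utf-8")
            vlen = struct.unpack(">I", f.read(4))[0]
            yield key, f.read(vlen)
```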

Personal Driving Style based ADAS Customization using Machine Learning for Public Driving Safety

  • Giyoung Hwang;Dongjun Jung;Yunyeong Goh;Jong-Moon Chung
    • Journal of Internet Computing and Services / v.24 no.1 / pp.39-47 / 2023
  • The development of autonomous driving and Advanced Driver Assistance System (ADAS) technology has grown rapidly in recent years. As most traffic accidents occur due to human error, self-driving vehicles can drastically reduce the number of accidents and crashes that occur on the roads today. Obviously, technical advancements in autonomous driving can lead to improved public driving safety. However, due to the current limitations in technology and the lack of public trust in self-driving cars (and drones), the actual use of Autonomous Vehicles (AVs) is still significantly low. According to prior studies, people's acceptance of an AV is mainly determined by trust, and people feel much more comfortable with a personalized ADAS designed around the way they drive. Based on such needs, a new customized ADAS that considers each driver's driving style is proposed in this paper. Each driver's behavior is divided into two categories: assertive and defensive. A novel customized ADAS algorithm with high classification accuracy is designed, which classifies each driver based on their driving style. Each driver's driving data is collected and simulated using CARLA, an open-source autonomous driving simulator, and Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) machine learning algorithms are used to optimize the ADAS parameters. The proposed scheme achieves high classification accuracy on time series driving data. Furthermore, among the vast amount of CARLA-based feature data extracted from the drivers, distinguishable driving features are selected using Support Vector Machine (SVM) technology by comparing their influence on the classification of the two categories; by extracting distinguishable features and eliminating outliers with the SVM, the classification accuracy is significantly improved. Based on this classification, the ADAS sensors can be made more sensitive for assertive drivers, enabling more advanced driving safety support. The proposed technology is especially important because the current state of the art in autonomous driving is at level 3 (based on the SAE International driving automation standards), which requires advanced functions that assist drivers using ADAS technology.
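
The SVM-based feature-selection step can be sketched as follows: fit a linear SVM on labeled assertive/defensive windows and keep the features with the largest absolute weights. The feature names and data below are stand-in assumptions, not the paper's CARLA features.

```python
# Sketch of SVM-based feature selection: train a linear SVM on
# assertive(1)/defensive(0) samples, then keep the features whose
# weights influence the decision boundary most.
import numpy as np
from sklearn.svm import LinearSVC

features = ["mean_speed", "max_throttle", "brake_rate", "lane_offset"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(features)))       # stand-in driving stats
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # stand-in labels

svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
influence = np.abs(svm.coef_[0])                # per-feature weight size
keep = np.argsort(influence)[::-1][:2]          # top-2 most influential
print([features[i] for i in keep])
```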

A Design on Informal Big Data Topic Extraction System Based on Spark Framework (Spark 프레임워크 기반 비정형 빅데이터 토픽 추출 시스템 설계)

  • Park, Kiejin
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.521-526 / 2016
  • Because on-line informal text data are massive in volume and unstructured in nature, there are limits to applying traditional relational data model technologies for data storage and analysis, and analyzing users' real-time reactions from dynamically generated massive social data is hard to accomplish. In this paper, to easily capture the semantics of massive informal on-line documents with an unsupervised learning mechanism, we design and implement an automatic topic extraction system based on the words that constitute each document. The input data set to the proposed system is first generated using an N-gram algorithm to build multi-word terms that capture the meaning of the sentences precisely, and Hadoop and Spark (an in-memory distributed computing framework) are adopted to run the topic model. In the experiments, TB-level input data are preprocessed and the proposed topic extraction steps are applied. We conclude that the proposed system extracts meaningful topics in good time, as the intermediate results come directly from main memory instead of being read from an HDD.
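
The described pipeline maps naturally onto pyspark.ml: tokenize, expand to N-grams, count terms, and fit an LDA topic model. A minimal sketch follows; the sample documents and parameters are illustrative, not the paper's setup.

```python
# Minimal PySpark sketch of the pipeline the abstract describes:
# N-gram expansion of tokens, term counting, then LDA topic modeling.
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, NGram, CountVectorizer
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("topic-extract").getOrCreate()
docs = spark.createDataFrame(
    [("memory prices fall as demand slows",),
     ("in memory computing speeds up analytics",)], ["text"])

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
bigrams = NGram(n=2, inputCol="words", outputCol="ngrams").transform(tokens)
cv = CountVectorizer(inputCol="ngrams", outputCol="features").fit(bigrams)
vectors = cv.transform(bigrams)

# Spark keeps these intermediate datasets in memory across stages,
# which is the performance point the abstract makes.
model = LDA(k=2, maxIter=10, featuresCol="features").fit(vectors)
model.describeTopics(3).show(truncate=False)
```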

Tunnel wall convergence prediction using optimized LSTM deep neural network

  • Arsalan, Mahmoodzadeh;Mohammadreza, Taghizadeh;Adil Hussein, Mohammed;Hawkar Hashim, Ibrahim;Hanan, Samadi;Mokhtar, Mohammadi;Shima, Rashidi
    • Geomechanics and Engineering / v.31 no.6 / pp.545-556 / 2022
  • Evaluation and optimization of tunnel wall convergence (TWC) plays a vital role in preventing potential problems during the tunnel construction and utilization stages. When convergence occurs at a high rate, it can lead to significant problems such as a reduced advance rate and safety, which in turn increases operating costs. In order to design an effective solution, it is important to accurately predict the degree of TWC; this can reduce the level of concern and have a positive effect on the design. With the development of soft computing methods, the use of deep learning algorithms and neural networks in tunnel construction has expanded in recent years. The current study employs a long short-term memory (LSTM) deep neural network model to predict TWC, based on 550 data points of observed parameters collected from different tunneling projects. Among the data collected during the pre-construction and construction phases of the projects, 80% is randomly used to train the model and the rest is used to test it. Several metrics, including the root mean square error (RMSE) and the coefficient of determination (R2), were used to assess the performance and precision of the applied method. The results indicate acceptable and reliable accuracy: the predicted values are in good agreement with the observed data. The proposed model can be considered for use in similar ground and tunneling conditions, and it has the potential to reduce tunneling uncertainties significantly and make deep learning a valuable tool for planning tunnels.
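
The experimental setup, an LSTM regressor trained on 80% of roughly 550 samples and scored with RMSE and R2, can be sketched in Keras as follows. The data here is synthetic and the layer sizes are illustrative assumptions, not the paper's tuned model.

```python
# Keras sketch of the setup in the abstract: an LSTM regressor
# trained on an 80/20 split of ~550 samples, scored with RMSE and R2.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from tensorflow import keras

rng = np.random.default_rng(1)
X = rng.normal(size=(550, 10, 6))   # 550 samples, 10 steps, 6 features
y = X[:, -1, :2].sum(axis=1) + 0.1 * rng.normal(size=550)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8,
                                          random_state=1)
model = keras.Sequential([
    keras.layers.Input(shape=(10, 6)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_tr, y_tr, epochs=20, batch_size=16, verbose=0)

pred = model.predict(X_te, verbose=0).ravel()
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE={rmse:.3f}  R2={r2_score(y_te, pred):.3f}")
```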

Load Shedding via Predicting the Frequency of Tuple for Efficient Analysis over Data Streams (효율적 데이터 스트림 분석을 위한 발생빈도 예측 기법을 이용한 과부하 처리)

  • Chang, Joong-Hyuk
    • The KIPS Transactions: Part D / v.13D no.6 s.109 / pp.755-764 / 2006
  • Recently, data streams have been generated in various application fields such as ubiquitous computing and sensor networks, and various algorithms have been proposed for processing them efficiently. These algorithms mainly focus on restricting memory usage and minimizing the processing time per data element. However, if the data elements of a stream arrive at a rapid rate, some of them cannot be processed in real time. Therefore, an efficient load shedding technique is required to process data streams efficiently. For this purpose, this paper proposes a load shedding technique over data streams based on predicting the frequency of each data element from its current frequency. In the proposed technique, the threshold for keeping a tuple alive is controlled adaptively in response to changes in the data stream, which helps prevent unnecessary load shedding.
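
The general mechanism can be sketched as follows: each element keeps a decayed frequency estimate, and a tuple is shed when its estimate falls below a threshold that rises with overload. The paper's exact frequency predictor and threshold control differ; this shows only the shape of the idea, with made-up constants.

```python
# Sketch of frequency-based load shedding: tuples of elements whose
# decayed frequency estimate falls below an adaptive threshold are
# shed when the arrival rate exceeds processing capacity.
class LoadShedder:
    def __init__(self, decay=0.9, capacity=1000):
        self.freq = {}          # element -> decayed frequency estimate
        self.capacity = capacity
        self.decay = decay
        self.threshold = 0.0

    def admit(self, element, arrival_rate):
        # Exponential decay: recent occurrences dominate the estimate.
        f = self.freq.get(element, 0.0) * self.decay + 1.0
        self.freq[element] = f
        # Raise the bar when arrivals exceed processing capacity.
        overload = max(0.0, arrival_rate / self.capacity - 1.0)
        self.threshold = overload * 3.0
        return f >= self.threshold  # False -> shed this tuple

shedder = LoadShedder()
for elem, rate in [("a", 800), ("b", 1500), ("a", 1500)]:
    print(elem, shedder.admit(elem, rate))
# "b" is shed under overload; the more frequent "a" survives.
```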

Big Data Meets Telcos: A Proactive Caching Perspective

  • Bastug, Ejder;Bennis, Mehdi;Zeydan, Engin;Kader, Manhal Abdel;Karatepe, Ilyas Alper;Er, Ahmet Salih;Debbah, Merouane
    • Journal of Communications and Networks / v.17 no.6 / pp.549-557 / 2015
  • Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and falls into the framework of big data. However, big data is itself another complex phenomenon to handle and comes with its notorious 4Vs: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of available data for content popularity estimation. In order to estimate content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over intervals of several hours. Then, an analysis is carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and the storage size. For instance, with 10% of content ratings and 15.4 GB of storage (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
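
The evaluation in the abstract can be mimicked with a toy computation: fill a base station's cache with the most popular contents that fit the storage budget, then measure request satisfaction and backhaul offload. All contents, sizes, and popularity numbers below are made up for illustration.

```python
# Toy version of the proactive-caching evaluation: cache the most
# popular contents within a storage budget, then measure how many
# requests are served locally and how much backhaul traffic is avoided.
def plan_cache(popularity, sizes, budget):
    cached, used = set(), 0.0
    for item in sorted(popularity, key=popularity.get, reverse=True):
        if used + sizes[item] <= budget:
            cached.add(item)
            used += sizes[item]
    return cached

popularity = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}  # estimated
sizes = {"A": 4.0, "B": 6.0, "C": 3.0, "D": 2.0}         # in GB
cached = plan_cache(popularity, sizes, budget=10.0)

requests = ["A", "A", "B", "C", "A", "D", "B"]
hits = [r for r in requests if r in cached]
satisfaction = len(hits) / len(requests)
offload = sum(sizes[r] for r in hits) / sum(sizes[r] for r in requests)
print(cached, f"satisfaction={satisfaction:.0%}", f"offload={offload:.0%}")
```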