• Title/Summary/Keyword: Representation of Memory

On B-spline Approximation for Representing Scattered Multivariate Data (비정렬 다변수 데이터의 B-스플라인 근사화 기법)

  • Park, Sang-Kun
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.35 no.8
    • /
    • pp.921-931
    • /
    • 2011
  • This paper presents a data-fitting technique in which a B-spline hypervolume is used to approximate a given data set of scattered data samples. We describe the implementation of the data structure of a B-spline hypervolume, and we measure its memory size to show that the representation is compact. The proposed technique includes two algorithms. One is for the determination of the knot vectors of a B-spline hypervolume. The other is for the control points, which are determined by solving a linear least-squares minimization problem where the solution is independent of the data-set complexity. The proposed approach is demonstrated with various data-set configurations to reveal its performance in terms of approximation accuracy, memory use, and running time. In addition, we compare our approach with existing methods and present unconstrained optimization examples to show the potential for various applications.
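
The control-point step described above reduces to an ordinary linear least-squares problem. The sketch below illustrates only that step, in one dimension, with an assumed noisy sine-wave sample set and a uniform clamped knot vector; it uses SciPy's BSpline helpers and is not the paper's multivariate hypervolume implementation.

```python
# A univariate illustration only; the paper fits a multivariate B-spline hypervolume.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))                           # scattered samples
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)    # noisy values

k = 3                                    # cubic B-spline
n_ctrl = 12                              # number of control points
# Clamped knot vector: endpoints repeated k+1 times, uniform interior knots.
interior = np.linspace(0.0, 1.0, n_ctrl - k + 1)[1:-1]
t = np.concatenate(([0.0] * (k + 1), interior, [1.0] * (k + 1)))

# Collocation matrix A[i, j] = N_j(x_i); control points via linear least squares.
A = BSpline.design_matrix(x, t, k).toarray()      # requires SciPy >= 1.8
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

spline = BSpline(t, coeffs, k)
print("max residual on the samples:", float(np.max(np.abs(spline(x) - y))))
```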

A Space-Efficient Inverted Index Technique using Data Rearrangement for String Similarity Searches (유사도 검색을 위한 데이터 재배열을 이용한 공간 효율적인 역 색인 기법)

  • Im, Manu;Kim, Jongik
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1247-1253
    • /
    • 2015
  • An inverted index structure is widely used for efficient string similarity search. One of the main requirements of similarity search is a fast response time; to this end, most techniques use an in-memory index structure. Since the size of an inverted index structure is usually very large, however, it is not practical to assume that the index structure will fit into main memory. To alleviate this problem, we propose a novel technique that reduces the size of an inverted index. In order to reduce the index size, the proposed technique rearranges the data strings so that strings containing the same q-grams are placed close to one another. The technique then encodes those multiple strings into a range. Through an experimental study using real data sets, we show that our technique significantly reduces the size of an inverted index without sacrificing query processing time.
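
A minimal sketch of the idea, under assumptions: the strings are simply sorted lexicographically as a stand-in for the paper's rearrangement step, and each q-gram's posting list is then collapsed into ranges of consecutive string IDs.

```python
from collections import defaultdict

def qgrams(s, q=2):
    """Set of q-grams of a string."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def to_ranges(sorted_ids):
    """Collapse a sorted ID list into (start, end) ranges of consecutive IDs."""
    out, start, prev = [], sorted_ids[0], sorted_ids[0]
    for i in sorted_ids[1:]:
        if i != prev + 1:
            out.append((start, prev))
            start = i
        prev = i
    out.append((start, prev))
    return out

data = ["apple", "apply", "appeal", "banana", "bandana", "maple"]
data.sort()                           # simple stand-in for the rearrangement step

index = defaultdict(list)
for sid, s in enumerate(data):
    for g in qgrams(s):
        index[g].append(sid)          # IDs are appended in increasing order

compressed = {g: to_ranges(ids) for g, ids in index.items()}
print(compressed["ap"])               # [(0, 2), (5, 5)]: appeal, apple, apply, maple
```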

A Modular Pointer Analysis using Function Summaries (함수 요약을 이용한 모듈단위 포인터분석)

  • Park, Sang-Woon;Kang, Hyun-Goo;Han, Tai-Sook
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.10
    • /
    • pp.636-652
    • /
    • 2008
  • In this paper, we present a modular pointer analysis algorithm based on the update history. We use the term 'module' to mean a set of mutually recursive procedures and the term 'modular analysis' to mean a program analysis that does not need the source code of other modules to analyze a module. Since a modular pointer analysis does not utilize any information on the callers, it is difficult to design a precise analysis that does not lose information related to the program flow or the calling context. In this paper, we propose a modular, flow- and context-sensitive pointer analysis algorithm based on the update history, which can represent the memory states of a procedure independently of the calling context and keep the information on the order in which side effects are performed. Such a memory representation not only enables the analysis to be formalized as a modular analysis, but also helps the analysis effectively identify killed side effects and relevant alias contexts.
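
As a loose illustration only: the sketch below shows a toy "summary" object that records a procedure's side effects in order and replays them on a caller's abstract memory, so the callee need not be re-analyzed per call site. The points-to domain, the strong-update rule, and all names are invented for illustration and are far simpler than the paper's update-history-based, flow- and context-sensitive summaries.

```python
from dataclasses import dataclass, field

@dataclass
class Summary:
    # Ordered list of side effects: (abstract location, set of possible pointer targets).
    updates: list = field(default_factory=list)

    def record(self, loc, targets):
        self.updates.append((loc, set(targets)))

    def apply(self, memory):
        """Replay the recorded updates, in order, on a caller's abstract memory."""
        for loc, targets in self.updates:
            memory[loc] = set(targets)      # strong update, for simplicity only
        return memory

# Callee writes "*p = &a" and then "*p = &b"; the later write kills the earlier one.
callee = Summary()
callee.record("p", {"a"})
callee.record("p", {"b"})

caller_memory = {"p": {"unknown"}}
print(callee.apply(caller_memory))          # {'p': {'b'}}
```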

Algorithm and Design of Double-base Log Encoder for Flash A/D Converters

  • Son, Nguyen-Minh;Kim, In-Soo;Choi, Jae-Ha;Kim, Jong-Soo
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.10 no.4
    • /
    • pp.289-293
    • /
    • 2009
  • This study proposes a novel double-base log encoder (DBLE) for flash analog-to-digital converters (ADCs). At the outputs of the DBLE, the analog inputs of the flash ADC are represented in a logarithmic number system with bases 2 and 3. A lookup table stores the sets of base-2 and base-3 exponents. This algorithm improves the performance of a DSP (Digital Signal Processor) system that takes the outputs of a flash ADC, since the double-base log number representation performs multiplication easily, within a negligible error range, in the ADC. We have designed and implemented a 6-bit DBLE with a ROM (Read-Only Memory) architecture in a 0.18 μm CMOS technology. The power consumption and speed of the DBLE are better than those of fat-tree and binary ROM encoders, at the cost of more chip area. The DBLE can be integrated into an SoC architecture with a DSP to improve processing speed.
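
The double-base idea can be sketched in software as follows: each 6-bit code is approximated by 2^a * 3^b via a lookup table of exponent pairs, so a multiplication reduces to adding exponent pairs. The exponent ranges and the relative-error criterion below are assumptions; the actual DBLE realizes the table as a ROM encoder in hardware.

```python
def nearest_double_base(x, a_range=range(0, 13), b_range=range(0, 9)):
    """Return (a, b) minimizing the relative error of 2**a * 3**b against x."""
    return min(((a, b) for a in a_range for b in b_range),
               key=lambda ab: abs(2 ** ab[0] * 3 ** ab[1] - x) / x)

# Build the lookup table for a 6-bit converter (codes 1..63).
lut = {code: nearest_double_base(code) for code in range(1, 64)}

# Multiplication in the double-base domain: add the exponent pairs, then reconstruct.
# Codes that are exact 2^a * 3^b values map exactly (e.g. 12); others, like 5 -> 6,
# carry representation error that propagates into the product.
a1, b1 = lut[12]
a2, b2 = lut[5]
approx_product = 2 ** (a1 + a2) * 3 ** (b1 + b2)
print(lut[12], lut[5], approx_product, "vs exact", 12 * 5)
```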

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.7
    • /
    • pp.3599-3619
    • /
    • 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and unable to investigate the level of contribution of both (static and motion) components. Our work highlights this problem and proposes a Static-Motion fused features descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only those trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points by using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors to guarantee an equal contribution from all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and make the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments have been conducted on three well-known datasets, i.e. UCF101, HMDB51 and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
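
A minimal sketch of Cholesky-based fusion of two feature vectors, under assumptions: the vectors are z-normalized and blended through the Cholesky factor of a 2x2 correlation matrix with an assumed correlation value; the random vectors merely stand in for the 3D-CNN static descriptor and the SIFT-flow motion descriptor.

```python
import numpy as np

def cholesky_fuse(static_feat, motion_feat, rho=0.5):
    """Fuse two equal-length feature vectors via the Cholesky factor of [[1, rho], [rho, 1]]."""
    s = (static_feat - static_feat.mean()) / (static_feat.std() + 1e-8)
    m = (motion_feat - motion_feat.mean()) / (motion_feat.std() + 1e-8)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    stacked = np.vstack([s, m])        # 2 x D
    fused = L @ stacked                # rows: [s, rho*s + sqrt(1-rho^2)*m]
    return fused[1]                    # the blend that mixes both sources

rng = np.random.default_rng(1)
static_feat = rng.standard_normal(128)   # stands in for a 3D-CNN appearance descriptor
motion_feat = rng.standard_normal(128)   # stands in for a trajectory/SIFT-flow descriptor
print(cholesky_fuse(static_feat, motion_feat).shape)   # (128,)
```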

A Study on the Meaning and Characteristic of Schema in the Thinking of Architectural Design - with Kant's Epistemology (건축디자인 사고에서 스키마의 의미와 특성에 관한 연구 - 칸트의 인식론적 관점에서 -)

  • Oh Sin-Wook;Cho Yong-Soo
    • Korean Institute of Interior Design Journal
    • /
    • v.14 no.3 s.50
    • /
    • pp.122-129
    • /
    • 2005
  • This study aims to develop an account of architectural design thinking by examining schema theories from the standpoint of Kant's epistemology. From this point of view, the related theories are reviewed in terms of expression form, the representation of architectural knowledge, and so on. The schema, formed by transcendental experience and the memory of such experience, not only plays an important part in the designer's thinking about a specific problem but also appears through distinctive features (images) and carries connotations such as the designer's ideology. Briefly put, the thinking of a designer, which is familiar to us as an abstract confusion, can be framed with such methodological tools as the schema and the image, and the relationship among them can be readily understood through the mechanism of Kant's [Concept-Schema-Intuition]. Evidence collected from case studies and its application to architectural design yields the following results. First, design thinking can be defined in terms of Kant's epistemology as composed of the schema and its extended factors (the architectural schema). Second, design thinking reveals different characteristics depending on the degree of the schema and the architectural schema. Lastly, a methodology is proposed after applying these results to architects' works.

Long-term Preservation of Digital Heritage: Building a National Strategy (디지털유산의 장기적 보존: 국가정책 수립을 위한 제안)

  • Lee, Soo Yeon
    • The Korean Journal of Archival Studies
    • /
    • no.10
    • /
    • pp.27-62
    • /
    • 2004
  • As the penetration of information technology into everyday life accelerates day by day, virtually all kinds of human representation of knowledge and the arts are produced and distributed in digital form. This is problematic, however, because digital objects are so volatile that it is not easy to keep them in a fixed form. This fatal fragility makes it extremely tricky to preserve the digital heritage of our time for the next generation. The present paper aims to introduce current endeavors at the international and national levels and to provide suggestions for a Korean national strategy for digital preservation. It starts by reviewing the global trends in digital archiving and long-term preservation, focusing on standardization, preservation strategies, and current experiments and projects being conducted for preserving various digital objects. It then sketches the national strategies of several leading countries. Based on this sketch, twofold suggestions for a Korean national strategy are proposed: establishing a central coordinating agency and accommodating the digital preservation issue in the legislative and regulatory framework for the information society. The paper concludes with the necessity of cooperation among heritage organizations, including libraries, archives, and museums. They should cooperate with each other because they have traditionally been entrusted with the custodianship of the collective memory of humankind, and the digital heritage cannot be passed on to the next generation without their endeavor. They should also work together because no single institution or nation could cover on its own what it takes to complete the task of long-term preservation of our digital heritage.

Moving Object Detection Using Sparse Approximation and Sparse Coding Migration

  • Li, Shufang;Hu, Zhengping;Zhao, Mengyao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.5
    • /
    • pp.2141-2155
    • /
    • 2020
  • In order to meet the requirements of background change, illumination variation, moving-shadow interference, and high accuracy in object detection with a moving camera, while striving for real-time performance and high efficiency, this paper presents an object detection algorithm based on sparse approximation recursion and sparse coding migration in a subspace. First, low-rank sparse decomposition is used to reduce the dimension of the data. Combined with dictionary sparse representation, the computational model is established by the recursive formula of sparse approximation, with the video sequences taken as subspace sets. The moving object is then obtained by the background difference method, which effectively reduces the computational complexity and running time. Following the idea of sparse coding migration, the above operations are carried out in the down-sampled space to further reduce the computational complexity and memory storage requirements, which adapts to multi-scale target objects and overcomes the impact of large anomaly areas. Finally, experiments are carried out on the VDAO dataset containing 59 sets of videos. The experimental results show that the algorithm can effectively detect moving objects from a camera moving at uniform speed, with both low computational complexity and low storage requirements, so that our proposed algorithm is suitable for detection systems with high real-time requirements.
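
A toy sketch of the low-rank plus sparse idea behind the background model: vectorized frames are stacked as columns, a rank-1 SVD term serves as the low-rank background, and the thresholded residual is taken as the sparse moving-object component. The synthetic frames, the fixed rank, and the threshold are assumptions; the paper additionally uses dictionary sparse representation, recursion over subspaces, and sparse coding migration in a down-sampled space.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w, n_frames = 32, 32, 20
background = rng.uniform(0.0, 1.0, (h, w))
frames = np.stack([background + 0.02 * rng.standard_normal((h, w))
                   for _ in range(n_frames)])
for t in range(n_frames):                      # a small bright block moving downwards
    frames[t, 5 + t:10 + t, 5:10] += 0.8

D = frames.reshape(n_frames, -1).T             # pixels x frames data matrix
U, s, Vt = np.linalg.svd(D, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])     # rank-1 "background" term
sparse = D - low_rank                          # residual holds the moving object
masks = (np.abs(sparse) > 0.4).T.reshape(n_frames, h, w)
print("foreground pixels in frame 0:", int(masks[0].sum()))   # ≈ 25, the 5x5 block
```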

A Hybrid of Smartphone Camera and Basestation Wide-area Indoor Positioning Method

  • Jiao, Jichao;Deng, Zhongliang;Xu, Lianming;Li, Fei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.723-743
    • /
    • 2016
  • Indoor positioning is considered an enabler for a variety of applications, and the demand for indoor positioning services has accelerated because people spend most of their time in indoor environments. Meanwhile, a smartphone with an integrated, powerful camera is an efficient platform for navigation and positioning. However, high-accuracy indoor positioning with a smartphone faces two constraints: (1) the limited computational and memory resources of the smartphone; (2) users moving through large buildings. To address these issues, this paper uses TC-OFDM to calculate coarse positioning information, including horizontal and altitude information, to assist smartphone camera-based positioning. Moreover, a unified representation model of image features under a variety of scenarios, called FAST-SURF, is established for computing the fine location. Finally, an optimized marginalized particle filter is proposed for fusing the positioning information from TC-OFDM and the images. The experimental results show that the wide-area location detection accuracy is 0.823 m (1σ) horizontally and 0.5 m vertically. Compared to WiFi-based and iBeacon-based positioning methods, our method is powerful while being easy to deploy and optimize.
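
A stripped-down sketch of the fusion step, under assumptions: a plain bootstrap particle filter (not the paper's marginalized filter) weights 2D position particles by two independent Gaussian likelihoods, one for the coarse TC-OFDM fix and one for the camera-based fix, then resamples; the noise levels and measurements are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_particles = 1000
particles = rng.uniform(0.0, 20.0, (n_particles, 2))   # initial 2D position guesses (m)

def fuse_step(particles, z_ofdm, z_camera, sigma_ofdm=3.0, sigma_cam=0.8, step=0.3):
    # Predict: random-walk motion model.
    particles = particles + step * rng.standard_normal(particles.shape)
    # Update: product of two independent Gaussian measurement likelihoods.
    w = (np.exp(-np.sum((particles - z_ofdm) ** 2, axis=1) / (2 * sigma_ofdm ** 2))
         * np.exp(-np.sum((particles - z_camera) ** 2, axis=1) / (2 * sigma_cam ** 2)))
    w /= w.sum()
    # Resample according to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

z_ofdm = np.array([10.5, 4.0])     # coarse base-station (TC-OFDM) fix
z_camera = np.array([10.1, 4.3])   # camera-based fix
for _ in range(10):
    particles = fuse_step(particles, z_ofdm, z_camera)
print("fused position estimate:", particles.mean(axis=0))
```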

Thickness and clearance visualization based on distance field of 3D objects

  • Inui, Masatomo;Umezun, Nobuyuki;Wakasaki, Kazuma;Sato, Shunsuke
    • Journal of Computational Design and Engineering
    • /
    • v.2 no.3
    • /
    • pp.183-194
    • /
    • 2015
  • This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes in most cases. After the distance field construction, thickness/clearance visualization at a near-interactive rate is achieved.
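
A toy sketch of the distance-field-based thickness idea on an analytic shape rather than a polyhedral model: a signed distance field of a hollow-sphere shell is sampled on a voxel grid, and a ray cast inward from a surface point reports twice the largest distance-to-surface value met while inside the material, which approximates the local wall thickness. The shape, grid resolution, and marching step are assumptions; the paper constructs the field for polygon meshes on the GPU and uses a modified ray casting.

```python
import numpy as np

r_outer, r_inner = 1.0, 0.7     # hollow sphere shell: true wall thickness = 0.3
n = 129                          # voxels per axis
axis = np.linspace(-1.2, 1.2, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
R = np.sqrt(X**2 + Y**2 + Z**2)
# Signed distance of the shell solid: negative inside the material.
sdf = np.maximum(R - r_outer, r_inner - R)

def sample(field, p):
    """Nearest-neighbour lookup of the voxelized field at point p."""
    idx = np.clip(np.round((p + 1.2) / 2.4 * (n - 1)).astype(int), 0, n - 1)
    return field[tuple(idx)]

# Cast a ray inward from a point on the outer surface; while the ray stays inside
# the material, twice the largest distance-to-surface value approximates thickness.
p = np.array([r_outer, 0.0, 0.0])
inward = np.array([-1.0, 0.0, 0.0])
best, t = 0.0, 0.0
while True:
    d = sample(sdf, p + t * inward)
    if d > 0 and t > 0.05:       # the ray has left the material (inner surface reached)
        break
    best = max(best, -d)
    t += 0.01
print("estimated wall thickness:", round(2 * best, 3))   # ≈ 0.3 (grid-limited accuracy)
```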