• Title/Summary/Keyword: sequential search method

Search Result 109, Processing Time 0.024 seconds

An Efficient Bitmap Indexing Method for Multimedia Data Reflecting the Characteristics of MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 특성을 반영한 효율적인 멀티미디어 데이타 비트맵 인덱싱 방법)

  • Jeong Jinguk;Nang Jongho
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.9-20
    • /
    • 2005
  • Recently, the MPEG-7 standard, a multimedia content description standard, has been widely used in content-based image/video retrieval systems. However, since the descriptors standardized in MPEG-7 are usually high-dimensional and suffer from the so-called 'curse of dimensionality', previously proposed indexing methods (for example, multidimensional indexing methods, dimensionality reduction methods, filtering methods, and so on) cannot effectively index a multimedia database represented in MPEG-7. This paper proposes an efficient multimedia data indexing mechanism reflecting the characteristics of MPEG-7 visual descriptors. In the proposed mechanism, a descriptor is transformed into a histogram of some attributes. By representing the value of each bin as a binary number, the histogram itself, which is the visual descriptor for an object in the multimedia database, can be represented as a bit string. The bit strings of all objects in the multimedia database are collected to form an index file, the bitmap index. By XORing them with the descriptor of the query object, candidate solutions for a similarity search can be computed easily; the candidates are then checked against the query object to compute the similarity precisely with an exact metric such as the L1-norm. These indexing and searching mechanisms are efficient because the filtering step is performed by simple bit operations, which reduces the search space dramatically. In experiments with more than 100,000 real images, the proposed indexing and searching mechanisms are about 15 times faster than sequential searching, with more than 90% accuracy.
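The filter-then-verify idea in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the 4-bit bin width, the popcount threshold, and the toy histograms are all assumptions for the example.

```python
# Sketch of XOR-based bitmap filtering followed by exact L1-norm verification.

def histogram_to_bits(hist, bits_per_bin=4):
    """Pack each quantized bin value into a fixed-width binary string."""
    top = (1 << bits_per_bin) - 1
    return "".join(format(min(v, top), f"0{bits_per_bin}b") for v in hist)

def xor_filter(index, query_bits, max_diff_bits):
    """Cheap filter: keep ids whose bit string differs from the query in few bit positions."""
    q = int(query_bits, 2)
    candidates = []
    for obj_id, bits in index.items():
        diff = bin(q ^ int(bits, 2)).count("1")  # popcount of the XOR
        if diff <= max_diff_bits:
            candidates.append(obj_id)
    return candidates

def l1_distance(h1, h2):
    """Exact L1-norm used to re-rank the surviving candidates."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

hists = {"a": [3, 0, 7, 2], "b": [3, 1, 7, 2], "c": [0, 15, 0, 9]}
index = {k: histogram_to_bits(v) for k, v in hists.items()}  # the bitmap index
query = [3, 0, 7, 3]
cands = xor_filter(index, histogram_to_bits(query), max_diff_bits=4)
best = min(cands, key=lambda k: l1_distance(hists[k], query))
```

The XOR pass touches only bit strings, so the expensive L1 computation runs on the small candidate set rather than the whole database.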

Performance Comparison of Spatial Split Algorithms for Spatial Data Analysis on Spark (Spark 기반 공간 분석에서 공간 분할의 성능 비교)

  • Yang, Pyoung Woo;Yoo, Ki Hyun;Nam, Kwang Woo
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.25 no.1
    • /
    • pp.29-36
    • /
    • 2017
  • In this paper, we implement a spatial big data analysis prototype based on Spark, an in-memory system, and compare performance across spatial split algorithms on this basis. In cluster computing environments, big data is divided into blocks of a certain size in order to balance the computing load. Existing research showed that, for Hadoop-based spatial big data systems, spatial split methods are more effective than the general sequential split method. A Hadoop-based spatial data system stores raw data as-is in spatially divided blocks. In the proposed Spark-based spatial analysis system, however, spatial data is converted into an in-memory data structure and stored in spatial blocks for search efficiency. Therefore, in this paper, we propose an in-memory spatial big data prototype and a spatial split block storage method, and compare the performance of existing spatial split algorithms on the proposed prototype. We present an appropriate spatial split strategy for the Spark-based big data system. In the experiments, we compared the query execution times of the spatial split algorithms and confirmed that the BSP algorithm shows the best performance.
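The contrast between sequential splitting and spatial splitting can be illustrated with a toy kd-style BSP split. This is an assumption-level sketch of the general idea, not the paper's Spark implementation; the recursive median split and the point grid are illustrative.

```python
# Sequential split (input order) vs. BSP-style spatial split (median partitioning).

def sequential_split(points, n_blocks):
    """Split points in input order into fixed-size blocks, ignoring geometry."""
    size = -(-len(points) // n_blocks)  # ceiling division
    return [points[i:i + size] for i in range(0, len(points), size)]

def bsp_split(points, depth):
    """Recursively split on the median coordinate, alternating x/y axes."""
    if depth == 0 or len(points) <= 1:
        return [points]
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return bsp_split(pts[:mid], depth - 1) + bsp_split(pts[mid:], depth - 1)

points = [(x, y) for x in range(4) for y in range(4)]  # 16 points on a grid
seq_blocks = sequential_split(points, 4)
bsp_blocks = bsp_split(points, depth=2)  # 4 spatially coherent, balanced blocks
```

Median splitting keeps every block the same size while also keeping each block spatially compact, which is what makes range queries over the blocks cheaper than with an order-based split.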

Efficient Collaboration Method Between CPU and GPU for Generating All Possible Cases in Combination (조합에서 모든 경우의 수를 만들기 위한 CPU와 GPU의 효율적 협업 방법)

  • Son, Ki-Bong;Son, Min-Young;Kim, Young-Hak
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.9
    • /
    • pp.219-226
    • /
    • 2018
  • One systematic way to generate all possible cases of a combination is to construct a combination tree, whose time complexity is O($2^n$). A combination tree is used for various purposes, such as the graph homogeneity problem, the initial model for calculating frequent item sets, and so on. However, algorithms that must search all cases of a combination are difficult to use in practice due to their high time complexity. Nevertheless, as data volumes grow and various studies attempt to utilize the data, the need to search all cases is increasing. Recently, as GPU environments have become popular and easily accessible, various attempts have been made to reduce running time by parallelizing algorithms that have high time complexity in a serial environment. Because the method of generating all cases of a combination is sequential and the sizes of its sub-tasks are biased, it is not well suited to parallel implementation. The efficiency of a parallel algorithm is maximized when all threads have tasks of similar size. In this paper, we propose a method for efficient collaboration between the CPU and GPU to parallelize the problem of generating all cases. To evaluate the performance of the proposed algorithm, we analyze its time complexity theoretically and compare its experimental running time with other algorithms in CPU and GPU environments. Experimental results show that the proposed CPU-GPU collaboration algorithm maintains a balance between the execution times of the CPU and GPU compared to previous algorithms, and that the execution time improves remarkably as the number of elements increases.
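The load-balancing problem the abstract describes (biased sub-task sizes) can be illustrated by ranking the C(n, k) combinations and handing each worker a contiguous, near-equal slice of ranks. This is a simplified sketch with plain Python workers standing in for the CPU/GPU split; `worker_slice` and its parameters are illustrative names, not the paper's API.

```python
# Balanced partitioning of combination enumeration across workers by rank.
from itertools import combinations, islice
from math import comb

def worker_slice(n, k, worker_id, n_workers):
    """Give each worker a contiguous, near-equal slice of the C(n, k) ranks."""
    total = comb(n, k)
    start = worker_id * total // n_workers
    stop = (worker_id + 1) * total // n_workers
    return list(islice(combinations(range(n), k), start, stop))

n, k, workers = 6, 3, 4
parts = [worker_slice(n, k, w, workers) for w in range(workers)]
```

Because the slices are cut by rank rather than by tree branch, every worker gets within one combination of the same workload, which is the balance property the abstract argues parallel efficiency depends on.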

Calibration of Car-Following Models Using a Dual Genetic Algorithm with Central Composite Design (중심합성계획법 기반 이중유전자알고리즘을 활용한 차량추종모형 정산방법론 개발)

  • Bae, Bumjoon;Lim, Hyeonsup;So, Jaehyun (Jason)
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.18 no.2
    • /
    • pp.29-43
    • /
    • 2019
  • The calibration of microscopic traffic simulation models has received much attention in the simulation field. Although no standard procedure has been established for it, genetic algorithms (GAs) have been widely employed in recent literature because of their high efficiency in finding solutions to such optimization problems. However, their performance still falls short for simulation analyses that must support fast decision making. This paper proposes a new calibration procedure using a dual GA and central composite design (CCD) in order to improve efficiency. The calibration goes through three major sequential steps: (1) experimental design using CCD to estimate a quadratic response surface model (RSM), (2) a first GA procedure using the RSM with CCD to find a near-optimal initial population for the next step, and (3) a second GA procedure to find the final solution. The proposed method was applied to calibrating the Gipps car-following model with respect to maximizing the likelihood of the spacing distribution between a lead and a following vehicle. To evaluate the performance of the proposed method, a conventional calibration approach using a single GA was compared under both simulated and real vehicle trajectory data. The proposed approach was found to speed up the optimization by starting the search from an initial population closer to the optimum than that of the conventional approach. This result implies that the proposed approach is beneficial for large-scale traffic network simulation analyses, and the method can be extended to other GA-based optimization tasks in transportation studies.
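The two-stage idea (a coarse stage locates a promising region, and the second stage starts its population there) can be mimicked with a toy evolutionary loop. Everything here is a stand-in: the quadratic objective replaces the Gipps-model likelihood, and the mutate-around-the-best loop replaces both the RSM surrogate and a real GA.

```python
# Illustrative two-stage search mirroring the dual-GA seeding idea.
import random

def objective(x):
    """Stand-in calibration error surface with its optimum at x = 2."""
    return (x - 2.0) ** 2 + 1.0

def stage(pop, generations=20, step=0.5):
    """Toy evolutionary loop: repeatedly mutate around the best individual."""
    best = min(pop, key=objective)
    for _ in range(generations):
        pop = [best + random.uniform(-step, step) for _ in pop]
        best = min([best] + pop, key=objective)  # keep the incumbent (elitism)
    return best

random.seed(0)
# Stage 1: coarse search over a wide range with large mutations.
coarse = stage([random.uniform(-10, 10) for _ in range(8)], step=2.0)
# Stage 2: seed the population near the stage-1 solution and refine.
fine = stage([coarse] + [coarse + random.uniform(-0.5, 0.5) for _ in range(7)],
             step=0.1)
```

The second stage converges quickly precisely because its initial population is already close to the optimum, which is the efficiency argument the abstract makes for the dual-GA procedure.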

Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • 박상현;원정임;윤지희;김상욱
    • Journal of KIISE:Databases
    • /
    • v.31 no.5
    • /
    • pp.468-478
    • /
    • 2004
  • It is essential in various application areas of data mining and bioinformatics to effectively retrieve the occurrences of interesting patterns from sequence databases. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query to find the temporal causal relationships among the network events is as follows: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP, subsequently followed by TCPConnectionClose, under the constraint that the interval between the first two events is not larger than 20 seconds and the interval between the first and third events is not larger than 40 seconds.' This paper proposes an indexing method that enables such queries to be answered efficiently. Unlike previous methods that rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, which is proven to be efficient both in storage and search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of the event type Ei in W. Here, n is the number of event types that can occur in the system of interest. The problem of the 'dimensionality curse' may arise when n is large; therefore, we use dimension selection or event type grouping to avoid this problem. The experimental results reveal that our proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
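The window-to-vector mapping described above is simple to sketch: for a window W, the i-th component is the offset of the first occurrence of event type E_i from the start of W. The event names are taken from the example query; the `missing=-1` convention for absent event types is an assumption for this illustration.

```python
# Map a sliding window of (timestamp, type) events to the n-dimensional
# vector that is fed into the multi-dimensional spatial index.

def window_vector(window, event_types, missing=-1):
    """i-th component: offset of the first occurrence of event_types[i] in the window."""
    t0 = window[0][0]  # timestamp of the first event in W
    vec = []
    for etype in event_types:
        first = next((t for t, e in window if e == etype), None)
        vec.append(first - t0 if first is not None else missing)
    return vec

types = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]
window = [(100, "CiscoDCDLinkUp"), (115, "MLMStatusUP"), (138, "TCPConnectionClose")]
vec = window_vector(window, types)  # one point for the spatial index
```

For this window the vector is [0, 15, 38], which satisfies the example query's constraints (15 ≤ 20 seconds and 38 ≤ 40 seconds), so a range query over the index would return it as a match.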

Design and Implement an Internet-Based Courseware (인터넷 기반의 코스웨어의 설계 및 구현)

  • Lee, Geon-Jin
    • Journal of The Korean Association of Information Education
    • /
    • v.1 no.1
    • /
    • pp.82-91
    • /
    • 1997
  • The purpose of this thesis is to design and implement an efficient Internet-based courseware which facilitates problem-solving learning. This courseware was developed in order to provide important foundations for learning in an open-education environment using the WWW. The targeted level is elementary school students. To do this, the definition of problem solving, its processes, and the advantages and pitfalls of computer-based problem-solving learning were examined, along with the advantages of using the WWW as an educational tool. The theme of the implemented courseware was selected from SATIS, which is relevant for problem-solving learning. The courseware has three main parts: a learning activity module, a teaching activity module, and a learning tool module. The learning activity module controls the courseware flow and was implemented in accordance with the problem-based learning processes. It can proceed either in a sequential way or by random access through links. The advantage of the random access method is that it may facilitate student learning, because each student can regulate the learning process to match their own experiences. The teaching activity module provides teachers with useful information for helping students learn, and it can also be used as an assessment tool for student achievement. The learning tool module consists of a conversational note, e-mail addresses, help, and a search tool. It is linked with the learning activity module and the teaching activity module so that teachers and students can actively participate in the teaching-learning process.


The Design of Transform and Quantization Hardware for High-Performance HEVC Encoder (고성능 HEVC 부호기를 위한 변환양자화기 하드웨어 설계)

  • Park, Seungyong;Jo, Heungseon;Ryoo, Kwangki
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.2
    • /
    • pp.327-334
    • /
    • 2016
  • In this paper, we propose a hardware architecture of transform and quantization for a high-performance HEVC (High Efficiency Video Coding) encoder. The HEVC transform decides the transform mode by comparing RD-costs to find the best mode. However, the RD-cost is computed from the bit-rate and distortion, which are obtained through transform, quantization, de-quantization, and inverse transform. Due to the many calculations and the long encoding time, it is hard to process high-resolution, high-definition images in real time. This paper proposes a transform mode decision method that compares only the sums of coefficients after the transform, using BD-PSNR and BD-Bitrate as performance indicators. Based on the experimental results, we confirmed that this transform mode decision can process images with no significant change in image quality. We reduced the hardware area by sharing outputs across transform modes and reusing multiplied coefficients as much as possible, and we raised performance by implementing a sequential pipeline. Even though our design uses a larger process technology than that of the reference design, it halves the hardware area and increases performance 2.3 times.

Information-Seeking Pathways by Mothers in the Context of Their Children's Health (어린이 건강과 관련한 어머니들의 정보탐색 경로)

  • Lee, Hanseul
    • Journal of Korean Library and Information Science Society
    • /
    • v.52 no.3
    • /
    • pp.21-48
    • /
    • 2021
  • Today, with countless health information accessible online and offline, the public has been able to explore health-related information in various ways. The current study focuses on the information-seeking behavior of mothers who actively explore information related to the health of their healthy infants (aged between 0 and 3 years). The researcher conducted in-depth interviews with 24 American, Korean, and Korean immigrant mothers living in the United States, and then analyzed the sequential order of the information sources that they used to search for health-related information about their children. The research highlights that the mothers' information-seeking pathways and searched topics tended to differ according to their children's health conditions (e.g., ill vs. healthy). For instance, more diverse health information sources (e.g., public libraries, government health agencies, daycare teachers) were used when their children were not ill. In addition, when a child was ill, mothers were likely to focus first on information about specific diseases or symptoms, whereas when the child was healthy, they tended to explore information on various health topics such as growth and development, nutrition and diet, parenting, and so on. Based on the results, implications are discussed for information professionals designing and providing health-related information services to mothers of healthy infants and toddlers.

The Comparison of Susceptibility Changes in 1.5T and3.0T MRIs due to TE Change in Functional MRI (뇌 기능영상에서의 TE값의 변화에 따른 1.5T와 3.0T MRI의 자화율 변화 비교)

  • Kim, Tae;Choe, Bo-Young;Kim, Euy-Neyng;Suh, Tae-Suk;Lee, Heung-Kyu;Shinn, Kyung-Sub
    • Investigative Magnetic Resonance Imaging
    • /
    • v.3 no.2
    • /
    • pp.154-158
    • /
    • 1999
  • Purpose : The purpose of this study was to find the optimum TE value for enhancing the $T_2^{*}$ weighting effect while minimizing SNR degradation, and to compare the BOLD effects according to changes of TE in 1.5T and 3.0T MRI systems. Materials and Methods : Healthy normal volunteers (eight males and two females, aged 24-38 years) participated in this study. Each volunteer was asked to perform a simple finger-tapping task (sequential opposition of the thumb to each of the other four fingers) with the right hand at a mean frequency of about 2 Hz. The stimulus was initially off for 3 images and was then alternately switched on and off for 2 cycles of 6 images. Images were acquired on the 1.5T and 3.0T MRI systems with the FLASH (fast low angle shot) pulse sequence (TR : 100ms, FA : $20^{\circ}$, FOV : 230mm), with TE values of 26, 36, 46, 56, 66, and 76 ms in 1.5T and 16, 26, 36, 46, 56, and 66 ms in 3.0T. After the completion of the scan, MR images were transferred to a PC and processed with a home-made analysis program based on the correlation coefficient method with a threshold value of 0.45. To search for the optimum TE value in fMRI, the difference between activation and rest caused by the susceptibility change at each TE was used in 1.5T and 3.0T, respectively. In addition, a functional $T_2^{*}$ map was calculated to quantify the susceptibility change. Results : The calculated optimum TE for fMRI was $61.89{\pm}2.68$ ms at 1.5T and $47.64{\pm}13.34$ ms at 3.0T. The maximum percentage of signal intensity change due to the susceptibility effect in the activation region was 3.36% at TE 66 ms in 1.5T and 10.05% at TE 46 ms in 3.0T, respectively. The signal intensity change at 3.0T was about 3 times larger than that at 1.5T. The calculated optimum TE values were consistent with the TE values obtained from the maximum signal change at each TE.
Conclusion : In this study, the 3.0T MRI was clearly more sensitive, by about three times, than the 1.5T in detecting the susceptibility change due to the deoxyhemoglobin level change in functional MR imaging. Thus, 3.0T fMRI is more useful than 1.5T.
