

A Pipelined Parallel Optimized Design for Convolution-based Non-Cascaded Architecture of JPEG2000 DWT (JPEG2000 이산웨이블릿변환의 컨볼루션기반 non-cascaded 아키텍처를 위한 pipelined parallel 최적화 설계)

  • Lee, Seung-Kwon;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.7
    • /
    • pp.29-38
    • /
    • 2009
  • In this paper, a high-performance pipelined computing design of parallel multiplier, temporal buffer, and parallel accumulator is presented for the convolution-based non-cascaded architecture, aimed at real-time Discrete Wavelet Transform (DWT) processing. The number of convolution multiplications in the DWT is reduced by up to 1/4 by exploiting the symmetry of the filter coefficients and the up/down sampling, and computation is accelerated 3-5 times by LUT-based Distributed Arithmetic (DA) multiplication, in which multiple filter coefficients are parallelized into product terms with the image data. Further, computed product terms are reused by storing them in the temporal buffer, which saves computation as well as 50% of the dynamic power. The convolved product terms of image data and filter coefficients are realigned and stored in the temporal buffer for accumulated addition, and the buffer is managed as parallel aligned storage for high-speed sequential retrieval by the parallel accumulators. The convolved computation is pipelined across the parallel multiplier, temporal buffer, and parallel accumulator, where the degree of parallelism of the temporal buffer and accumulator is optimized with respect to the performance of the parallel DA multiplier to improve pipelining performance. The proposed architecture is back-end designed with a 0.18um library and verified to sustain 30 fps throughput for SVGA (800$\times$600) images at 90 MHz.
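The arithmetic saving described above, folding symmetric filter taps together and skipping the outputs discarded by 2:1 down-sampling, can be illustrated with a short sketch. The filter length, coefficients, and signal below are illustrative assumptions, not the paper's design, which is a hardware LUT-based DA pipeline rather than software.

```python
# Hedged sketch: how filter-coefficient symmetry plus dyadic down-sampling
# reduce the multiplications of one convolution-based DWT low-pass stage.
import numpy as np

def dwt_lowpass_symmetric(x, h):
    """One low-pass/decimate DWT stage for a symmetric filter h.

    Because h[k] == h[len(h)-1-k], the samples sharing a coefficient are
    added first, so each output needs only ~len(h)/2 multiplications, and
    the 2:1 down-sampling skips every other output position entirely.
    """
    n, L = len(x), len(h)
    half = L // 2
    y = []
    for m in range(0, n - L + 1, 2):          # step of 2 = down-sampling
        acc = 0.0
        for k in range(half):                 # fold symmetric taps together
            acc += h[k] * (x[m + k] + x[m + L - 1 - k])
        if L % 2:                             # middle tap of an odd-length filter
            acc += h[half] * x[m + half]
        y.append(acc)
    return np.array(y)

# toy example with a symmetric 4-tap filter
x = np.arange(16, dtype=float)
h = np.array([0.25, 0.75, 0.75, 0.25])
print(dwt_lowpass_symmetric(x, h))
```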

Is it necessary to distinguish semantic memory from episodic memory? (의미기억과 일화기억의 구분은 필요한가)

  • 이정모;박희경
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.3_4
    • /
    • pp.33-43
    • /
    • 2000
  • The distinction between short-term store (STS) and long-term store (LTS) has been made from the perspective of information processing. Memory system theorists have argued that memory could be conceived as multiple memory systems beyond the concept of a single LTS. Popular memory system models are Schacter and Tulving's (1994) multiple memory systems and Squire's (1987) taxonomy of long-term memory. These models agree that amnesic patients have an intact STS but an impaired LTS, and have preserved implicit memory. However, there is a debate about the nature of the long-term memory impairment. One model considers the amnesic deficit a selective episodic memory impairment, whereas the other sees the deficit as an impairment of both episodic and semantic memory. At present, it remains unclear whether episodic memory should be distinguished from semantic memory in terms of retrieval operations. The distinction between declarative and nondeclarative memory may be an alternative way to reflect explicit and implicit memory. Research focused on the function of the frontal lobe might give clues to the debate about the nature of LTS.


A Korean Community-based Question Answering System Using Multiple Machine Learning Methods (다중 기계학습 방법을 이용한 한국어 커뮤니티 기반 질의-응답 시스템)

  • Kwon, Sunjae;Kim, Juae;Kang, Sangwoo;Seo, Jungyun
    • Journal of KIISE
    • /
    • v.43 no.10
    • /
    • pp.1085-1093
    • /
    • 2016
  • A community-based Question Answering system provides answers to questions from the documents uploaded on web communities. In order to improve question analysis, previous methods have developed rules specific to a target domain or have applied machine learning to only parts of the process. However, these methods incur an excessive cost when expanding to new domains, or overfit the system to a specific domain. This paper proposes a multiple machine learning method that automates the overall process by applying an appropriate machine learning technique at each stage for efficient processing of a community-based Question Answering system. The system is divided into a question analysis part and an answer selection part. The question analysis part consists of a question focus extractor, which identifies the focused phrases in questions using conditional random fields, and a question type classifier, which classifies the topics of questions using a support vector machine. In the answer selection part, the weights used by the similarity estimation models are trained through an artificial neural network. In addition, there are many cases in which the results of morphological analysis are not reliable for data uploaded on web communities, so we suggest a method that minimizes the impact of morphological analysis by using character features in the question analysis stage. The proposed system outperforms the previous system, with a Mean Average Precision of 0.765 and an R-Precision of 0.872.
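As a rough illustration of the character-feature idea mentioned above (reducing dependence on morphological analysis for noisy community text), the sketch below extracts character n-gram features per token; the feature names and n-gram sizes are assumptions, not the paper's exact feature set.

```python
# Hedged sketch: character n-gram features for noisy community text, used as
# an alternative to (or backup for) morphological analysis.
def char_ngram_features(token, n_values=(2, 3)):
    """Return a dict of character n-gram features for one token."""
    feats = {f"len={len(token)}": 1}
    for n in n_values:
        padded = f"^{token}$"                  # mark token boundaries
        for i in range(len(padded) - n + 1):
            feats[f"c{n}:{padded[i:i + n]}"] = 1
    return feats

# toy usage: features for each token of a community question
question = "매출 조회 어떻게 하나요".split()
for tok in question:
    print(tok, sorted(char_ngram_features(tok))[:5])
```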

Improvement of Cloud-data Filtering Method Using Spectrum of AERI (AERI 스펙트럼 분석을 통한 구름에 영향을 받은 스펙트럼 자료 제거 방법 개선)

  • Cho, Joon-Sik;Goo, Tae-Young;Shin, Jinho
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.2
    • /
    • pp.137-148
    • /
    • 2015
  • The National Institute of Meteorological Research (NIMR) has operated a Fourier Transform InfraRed (FTIR) spectrometer, the Atmospheric Emitted Radiance Interferometer (AERI), on Anmyeon Island, Korea, since June 2010. The ground-based AERI, whose hyper-spectral infrared sensor is similar to those on satellites, can be an alternative means of validating satellite-based remote sensing. In this regard, NIMR has focused on improving the quality of AERI retrievals, particularly the cloud-data filtering method. An AERI spectrum measured on a typical clear day is selected as the reference spectrum, and the atmospheric window region of the spectrum is used. Threshold tests were performed in order to select a valid threshold. Methane was retrieved with the new method, which uses the reference spectrum, and with the existing method, which uses KLAPS cloud cover information, and each methane retrieval was compared with ground-based in-situ measurements. The quality of the AERI methane retrievals from the new method was improved significantly over the KLAPS-based method. In addition, a comparison of the vertical total column of methane from AERI and GOSAT shows good agreement.
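The clear-sky-reference screening idea can be sketched as a simple threshold test on the departure from a reference spectrum within an atmospheric-window interval; the window limits, threshold value, and array layout below are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch: flag a spectrum as cloud-contaminated when its window-region
# radiance departs from a clear-day reference spectrum by more than a threshold.
import numpy as np

def is_cloud_contaminated(wavenumber, radiance, reference, threshold=5.0,
                          window=(800.0, 1200.0)):
    """Compare the mean absolute departure from the reference inside the
    atmospheric window (wavenumber limits in cm^-1, radiance in input units)."""
    mask = (wavenumber >= window[0]) & (wavenumber <= window[1])
    departure = np.mean(np.abs(radiance[mask] - reference[mask]))
    return departure > threshold

# toy usage with synthetic spectra
wn = np.linspace(500, 1500, 1000)
clear = np.full_like(wn, 50.0)
cloudy = clear + 20.0
print(is_cloud_contaminated(wn, cloudy, clear))   # True
```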

Automatic Text Categorization Using Passage-based Weight Function and Passage Type (문단 단위 가중치 함수와 문단 타입을 이용한 문서 범주화)

  • Joo, Won-Kyun;Kim, Jin-Suk;Choi, Ki-Seok
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.703-714
    • /
    • 2005
  • Research in text categorization has been confined to whole-document-level classification, probably due to the lack of full-text test collections. However, the full-length documents available today in large quantities pose renewed interest in text classification. A document is usually written in an organized structure to present its main topic(s). This structure can be expressed as a sequence of sub-topic text blocks, or passages. In order to reflect the sub-topic structure of a document, we propose a new passage-level or passage-based text categorization model, which segments a test document into several passages, assigns categories to each passage, and merges passage categories into document categories. Compared with traditional document-level categorization, two additional steps, passage splitting and category merging, are required in this model. Using four subsets of the Reuters text categorization test collection and a full-text test collection whose documents range from tens to hundreds of kilobytes, we evaluated the proposed model, especially the effectiveness of various passage types and the importance of passage location in category merging. Our results show that simple windows are best for all test collections tested in these experiments. We also found that passages contribute to the main topic(s) to different degrees, depending on their location in the test document.
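A minimal sketch of the passage-level pipeline (split into window passages, classify each, merge with location-dependent weights) follows; the window sizes, weight function, and classifier interface are placeholders, not the paper's settings.

```python
# Hedged sketch of passage-based categorization: split, classify per passage,
# then merge passage scores into document scores.
def split_into_windows(tokens, size=50, step=25):
    """Simple overlapping window passages over a token list."""
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - size + 1, 1), step)]

def categorize_document(tokens, classify_passage,
                        location_weight=lambda pos, n: 1.0):
    """classify_passage(passage) -> {category: score}; location_weight can
    favor early passages, which often carry the main topic."""
    passages = split_into_windows(tokens)
    doc_scores = {}
    for idx, passage in enumerate(passages):
        w = location_weight(idx, len(passages))
        for cat, score in classify_passage(passage).items():
            doc_scores[cat] = doc_scores.get(cat, 0.0) + w * score
    return sorted(doc_scores.items(), key=lambda kv: -kv[1])

# toy usage with a dummy passage classifier
dummy = lambda p: {"sports": p.count("goal"), "finance": p.count("stock")}
doc = ("goal " * 30 + "stock " * 5).split()
print(categorize_document(doc, dummy))
```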

Estimation of nighttime aerosol optical thickness from Suomi-NPP DNB observations over small cities in Korea (Suomi-NPP위성 DNB관측을 이용한 우리나라 소도시에서의 야간 에어로졸 광학두께 추정)

  • Choo, Gyo-Hwang;Jeong, Myeong-Jae
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.2
    • /
    • pp.73-86
    • /
    • 2016
  • In this study, an algorithm to estimate Aerosol Optical Thickness (AOT) over small cities during nighttime has been developed using the radiance from artificial light sources in small cities measured by the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor's Day/Night Band (DNB) aboard the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite. The algorithm is based on Beer's extinction law, with the artificial lights over small cities serving as the light sources. AOT is retrieved for cloud-free pixels over individual cities, and cloud screening is conducted using measurements from the infrared M-bands of VIIRS. The retrieved nighttime AOT is compared with the aerosol products from the MODerate resolution Imaging Spectroradiometer (MODIS) aboard the Terra and Aqua satellites. The correlation coefficients over individual cities range from around 0.6 to 0.7 between the retrieved nighttime AOT and MODIS AOT, with Root-Mean-Squared Differences (RMSD) ranging from 0.14 to 0.18. In addition, sensitivity tests were conducted for the factors affecting the nighttime AOT to estimate the range of uncertainty in the nighttime AOT retrievals. The results of this study indicate that it is promising to infer AOT over small cities in Korea at night using the DNB measurements. After further development and refinement, the retrieval algorithm is expected to produce nighttime aerosol information which is not operationally available over Korea.
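The Beer's-law inversion underlying this kind of retrieval can be written in a few lines; the function and variable names below, and the sample numbers, are illustrative assumptions rather than the paper's algorithm, which additionally handles cloud screening, calibration, and sensitivity factors.

```python
# Hedged sketch of the Beer-Lambert inversion: tau = -ln(I_obs / I_clear) / m,
# where I_clear is the city-light radiance on a reference clear night and m is
# the air-mass factor along the view path.
import math

def aot_from_city_light(observed_radiance, clear_radiance, airmass=1.0):
    """Aerosol optical thickness from the attenuation of a stable light source."""
    if observed_radiance <= 0 or clear_radiance <= 0:
        raise ValueError("radiances must be positive")
    return -math.log(observed_radiance / clear_radiance) / airmass

# toy example: 20% attenuation at nadir view -> AOT ~ 0.223
print(aot_from_city_light(0.8, 1.0))
```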

Design and Implementation of Multiple Filter Distributed Deduplication System Applying Cuckoo Filter Similarity (쿠쿠 필터 유사도를 적용한 다중 필터 분산 중복 제거 시스템 설계 및 구현)

  • Kim, Yeong-A;Kim, Gea-Hee;Kim, Hyun-Ju;Kim, Chang-Geun
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.10
    • /
    • pp.1-8
    • /
    • 2020
  • The need for techniques to store, manage, and retrieve alternative data has emerged as technologies built on the data generated by enterprises' business activities have become key to business success in recent years. To process unstructured data, which constitutes such alternative data, existing big data platform systems must load a large amount of data generated in real time without delay, and must manage storage space efficiently by applying deduplication across different storages when redundant data occur. In this paper, we propose a multi-layer distributed data deduplication system that applies the similarity of the Cuckoo-hashing filter technique while considering the characteristics of big data. Similarity between virtual machines is expressed through Cuckoo hashing, individual storage nodes improve performance through deduplication efficiency, and a multi-layer Cuckoo filter is applied to reduce processing time. Experimental results show that the proposed method shortens the processing time by 8.9% and increases the deduplication rate by 10.3%.
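A minimal cuckoo filter, the building block behind the multi-layer filtering described above, might look like the sketch below; the bucket count, bucket size, fingerprint width, and kick limit are assumptions rather than the paper's configuration, and the distributed multi-layer aspects are omitted.

```python
# Hedged sketch: a minimal cuckoo filter used as a membership pre-check in a
# chunk-level deduplication path (no false negatives, small false-positive rate).
import hashlib
import random

class CuckooFilter:
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        # num_buckets must be a power of two so the XOR alternate-index
        # relation below is its own inverse.
        self.buckets = [[] for _ in range(num_buckets)]
        self.num_buckets, self.bucket_size, self.max_kicks = num_buckets, bucket_size, max_kicks

    def _fingerprint(self, item: bytes) -> bytes:
        return hashlib.sha1(item).digest()[:2]          # 16-bit fingerprint

    def _indexes(self, item: bytes, fp: bytes):
        i1 = int.from_bytes(hashlib.md5(item).digest()[:4], "big") % self.num_buckets
        i2 = (i1 ^ int.from_bytes(hashlib.md5(fp).digest()[:4], "big")) % self.num_buckets
        return i1, i2

    def insert(self, item: bytes) -> bool:
        fp = self._fingerprint(item)
        i1, i2 = self._indexes(item, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))                      # both full: start kicking
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp   # evict a resident
            i = (i ^ int.from_bytes(hashlib.md5(fp).digest()[:4], "big")) % self.num_buckets
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                                     # filter considered full

    def contains(self, item: bytes) -> bool:
        fp = self._fingerprint(item)
        i1, i2 = self._indexes(item, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

# dedup pre-check: only chunks the filter may have seen need a full hash lookup
f = CuckooFilter()
f.insert(b"chunk-A")
print(f.contains(b"chunk-A"), f.contains(b"chunk-B"))   # True, then (almost surely) False
```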

VP Filtering for Efficient Query Processing in R-tree Variants Index Structures (R-tree 계열의 인덱싱 구조에서의 효율적 질의 처리를 위한 VP 필터링)

  • Kim, Byung-Gon;Lee, Jae-Ho;Lim, Hae-Chull
    • Journal of KIISE:Databases
    • /
    • v.29 no.6
    • /
    • pp.453-463
    • /
    • 2002
  • With the prevalence of multi-dimensional data such as images, content-based retrieval of data is becoming increasingly important. To handle multi-dimensional data, multi-dimensional index structures such as the R-tree, R*-tree, TV-tree, and MVP-tree have been proposed, and numerous research results on how to manipulate these structures effectively have been presented during the last decade. Query processing strategies, which are important for reducing processing time, are one such area of research. In this paper, we propose query processing algorithms for R-tree-based structures. The novel aspect of these algorithms is that they make use of the notion of VP (vantage point) filtering, a concept borrowed from the MVP-tree. The filtering notion allows computational overhead to be delayed until absolutely necessary. By doing so, we attain considerable performance benefits while paying insignificant overhead during construction of the index structure. We implemented our algorithms and carried out experiments to demonstrate the capability and usefulness of our method. For both range queries and incremental queries, and for index trees of all tested dimensionalities, the response time with VP filtering was always shorter than without it. We also showed quantitatively that VP filtering is closely related to the response time of the query.
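The vantage-point pruning idea can be sketched as a pre-filter inside a range query: a precomputed distance to a vantage point plus the triangle inequality rejects candidates before the full distance is evaluated. The data layout below is an assumption for illustration, not the paper's R-tree integration.

```python
# Hedged sketch: vantage-point filtering in a range query. Each object stores
# its precomputed distance to a vantage point; the triangle inequality prunes
# objects without computing the (expensive) query-object distance.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def range_query_vp(query, radius, vantage_point, objects_with_vp_dist):
    """objects_with_vp_dist: list of (object_vector, precomputed d(object, vp))."""
    d_q_vp = dist(query, vantage_point)
    hits = []
    for obj, d_obj_vp in objects_with_vp_dist:
        # triangle inequality: d(q, obj) >= |d(q, vp) - d(obj, vp)|
        if abs(d_q_vp - d_obj_vp) > radius:
            continue                      # filtered without computing d(q, obj)
        if dist(query, obj) <= radius:
            hits.append(obj)
    return hits

# toy usage
vp = (0.0, 0.0)
points = [(1.0, 1.0), (5.0, 5.0)]
objs = [(p, dist(p, vp)) for p in points]
print(range_query_vp((1.2, 0.9), 0.5, vp, objs))   # [(1.0, 1.0)]
```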

A Reduction Method of Over-Segmented Regions at Image Segmentation based on Homogeneity Threshold (동질성 문턱 값 기반 영상분할에서 과분할 영역 축소 방법)

  • Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.55-68
    • /
    • 2012
  • In this paper, we propose a novel method to solve the problem of excessive segmentation in the method of segmenting regions from an image using a Homogeneity Threshold ($H_T$). The previous $H_T$-based image segmentation algorithm carried out region growing using only the center pixel of the selected window, which resulted in excessively many segmented regions. The proposed method instead first determines whether the selected window is homogeneous before carrying out region growing. If the selected window is homogeneous, region growing is carried out using all pixels of the selected window; otherwise, it is carried out using only the center pixel. Consequently, the method remarkably reduces the number of over-segmented regions produced by $H_T$-based image segmentation. To show the validity of the proposed method, we carried out multiple experiments comparing the proposed method with the previous method under the same environment and conditions. The results show that the proposed method reduces the number of segmented regions by more than 40% with no difference in visual image quality compared with the previous method. In particular, when an image is reconstructed by uniting segmented regions in descending order of size, the previous method yields an unrecognizable image even with more than 1,000 regions, whereas the proposed method yields a recognizable image with fewer than 10 regions. For these reasons, we expect the proposed method to be useful in various fields such as object extraction, information retrieval from images, anatomy, biology, image visualization, and animation.
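A minimal sketch of the proposed seed selection, checking window homogeneity before choosing between whole-window and center-pixel region growing, is given below; the window size, threshold value, and grayscale example are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: choose region-growing seeds by first testing whether the whole
# selected window is homogeneous (pixel range within the homogeneity threshold).
import numpy as np

def seed_pixels(image, cy, cx, half=1, h_t=10):
    """Return seed coordinates for region growing around (cy, cx)."""
    win = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    if win.max() - win.min() <= h_t:                 # window itself is homogeneous
        return [(y, x) for y in range(cy - half, cy + half + 1)
                       for x in range(cx - half, cx + half + 1)]
    return [(cy, cx)]                                # heterogeneous: center pixel only

# toy 3x3 grayscale window: homogeneous, so all 9 pixels seed one region
img = np.array([[10, 11, 12], [11, 10, 11], [12, 11, 10]])
print(len(seed_pixels(img, 1, 1)))                   # 9 seeds -> fewer tiny regions
```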

Design and Implementation of High-dimensional Index Structure for the support of Concurrency Control (필터링에 기반한 고차원 색인구조의 동시성 제어기법의 설계 및 구현)

  • Lee, Yong-Ju;Chang, Jae-Woo;Kim, Hang-Young;Kim, Myung-Joon
    • The KIPS Transactions:PartD
    • /
    • v.10D no.1
    • /
    • pp.1-12
    • /
    • 2003
  • Recently, there have been many indexing schemes for multimedia data such as image and video data. However, recent database applications, for example data mining and multimedia databases, are required to support multi-user environments. For indexing schemes to be useful in a multi-user environment, a concurrency control algorithm is required. We therefore propose a concurrency control algorithm that can be applied to CBF (cell-based filtering), which uses cell signatures to alleviate the curse of dimensionality. In addition, we extend the SHORE storage system of the University of Wisconsin to handle high-dimensional data. The extended SHORE storage system provides conventional storage manager functions, guarantees the integrity of high-dimensional data, and accommodates large-scale feature vectors without requiring large amounts of main memory. Finally, we implement a web-based image retrieval system using the extended SHORE storage system. The key features of this system are platform-independent access to high-dimensional data and efficient content-based queries. Lastly, we evaluate the average response time of point queries, range queries, and k-nearest-neighbor queries with respect to the number of threads.
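The abstract does not detail the concurrency control algorithm itself, so the sketch below only illustrates generic lock coupling (crabbing) down an index path, one common way a search can proceed safely alongside concurrent inserts; it is not the paper's CBF-specific scheme, and the node layout is a placeholder.

```python
# Hedged sketch: lock coupling over index nodes, a generic concurrency-control
# pattern for tree- or cell-based index traversal.
import threading

class Node:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []
        self.children = children or []
        self.latch = threading.RLock()

def search_with_lock_coupling(root, predicate):
    """Hold a latch on the child before releasing the parent's latch."""
    node = root
    node.latch.acquire()
    while node.children:
        child = node.children[0]        # a real index picks the child by the predicate
        child.latch.acquire()           # acquire child first ...
        node.latch.release()            # ... then release the parent
        node = child
    result = [k for k in node.keys if predicate(k)]
    node.latch.release()
    return result

# toy usage: one root with one leaf
leaf = Node(keys=[3, 7, 9])
root = Node(children=[leaf])
print(search_with_lock_coupling(root, lambda k: k > 5))   # [7, 9]
```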