• Title/Summary/Keyword: 정보검색기법 (information retrieval techniques)


Wireless u-PC: Personal Workspace on a Wireless Network Storage (Wireless u-PC : 무선 네트워크 스토리지를 이용한 개인 컴퓨팅 환경의 이동성을 지원하는 서비스)

  • Sung, Baek-Jae;Hwang, Min-Kyung;Kim, In-Jung;Lee, Woo-Joong;Park, Chan-Ik
    • Journal of KIISE: Computing Practices and Letters / v.14 no.9 / pp.916-920 / 2008
  • A personal workspace consists of a user-specified computing environment: the user profile, applications and their configurations, and user data. Mobile computing devices (i.e., cellular phones, PDAs, laptop computers, and Ultra Mobile PCs) are getting smaller and lighter so that personal workspaces can be carried ubiquitously, and various workspace mobility solutions (e.g., VMware Pocket ACE [1], MojoPac [2], u-PC [3]) have appeared with the advance of virtualization and portable storage technology. With these solutions, a personal workspace can be loaded on a public PC. In our previous work, we proposed a framework called the ubiquitous personal computing environment (u-PC), which supports personal workspace mobility based on wireless iSCSI network storage. However, the previous u-PC supported only a limited set of applications because it used an IRP (I/O Request Packet) forwarding technique at the filter-driver level of the Windows operating system. In this paper, we implement OS-level virtualization using system call hooking on Windows. It supports personal workspace mobility, removes the limitation of the previous u-PC, and avoids the workspace-loading overhead that limits other solutions (e.g., VMware Pocket ACE and MojoPac). We implemented a prototype consisting of a Windows XP-based host PC and a Linux-based mobile device connected via the WiNET protocol of UWB, and demonstrate its usability through several use-case models.
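
The paper's mechanism is system call hooking inside Windows, which cannot be reproduced meaningfully here; as a rough, language-neutral illustration of the underlying idea (transparently redirecting workspace file I/O to network storage), the following Python sketch monkey-patches the built-in open. The mount point and virtual prefix are hypothetical.

    import builtins

    WORKSPACE_MOUNT = "/mnt/upc"          # hypothetical iSCSI-backed mount point
    VIRTUAL_PREFIX = "C:/Workspace/"      # hypothetical virtual workspace root

    _original_open = builtins.open

    def _redirecting_open(path, *args, **kwargs):
        # Redirect accesses under the virtual prefix to the network storage
        # mount, the way the hooked call would under the paper's approach.
        p = str(path)
        if p.startswith(VIRTUAL_PREFIX):
            p = WORKSPACE_MOUNT + "/" + p[len(VIRTUAL_PREFIX):]
        return _original_open(p, *args, **kwargs)

    builtins.open = _redirecting_open     # install the hook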

Comparison of Texture Images and Application of Template Matching for Geo-spatial Feature Analysis Based on Remote Sensing Data (원격탐사 자료 기반 지형공간 특성분석을 위한 텍스처 영상 비교와 템플레이트 정합의 적용)

  • Yoo Hee Young;Jeon So Hee;Lee Kiwon;Kwon Byung-Doo
    • Journal of the Korean Earth Science Society / v.26 no.7 / pp.683-690 / 2005
  • As remote sensing imagery with high spatial resolution (e.g., pixel resolution of 1 m or less) is widely used in specific application domains, the demand for advanced methods to handle such imagery is increasing. Among the many applicable methods, texture image analysis, which characterizes the spatial distribution of gray levels in a neighborhood, is one useful approach. We compared and analyzed texture images computed with the GLCM algorithm under various directions, kernel sizes, and parameter types, and then studied the spatial feature characteristics within each resulting image. In addition, a template matching program that searches for spatial patterns using template images selected from the original and texture images was implemented and applied, and matching probabilities were examined on the basis of the results. These results suggest effective applications for detecting and analyzing specifically shaped geological or other complex features in high-spatial-resolution imagery.
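
As a concrete reference point for the GLCM parameters the abstract mentions (direction, kernel size, parameter type) and for template matching, here is a small sketch using scikit-image (0.19 or later, where the functions are named graycomatrix/graycoprops); the random image stands in for real imagery.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, match_template

    def glcm_texture(patch, direction, distance=1, levels=64):
        """Compute one GLCM texture parameter set for a gray-level patch."""
        q = (patch.astype(float) / patch.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[distance], angles=[direction],
                            levels=levels, symmetric=True, normed=True)
        return {p: graycoprops(glcm, p)[0, 0]
                for p in ("contrast", "homogeneity", "energy", "correlation")}

    # Texture responses for four directions (0, 45, 90, 135 degrees)
    image = np.random.randint(0, 255, (128, 128)).astype(np.uint8)
    for angle in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        print(angle, glcm_texture(image, angle))

    # Template matching: locate the best match of a template in the image
    template = image[32:48, 32:48]
    score = match_template(image, template)
    row, col = np.unravel_index(np.argmax(score), score.shape)
    print("best match at", (row, col), "correlation", score.max())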

Performance Comparison of Spatial Split Algorithms for Spatial Data Analysis on Spark (Spark 기반 공간 분석에서 공간 분할의 성능 비교)

  • Yang, Pyoung Woo;Yoo, Ki Hyun;Nam, Kwang Woo
    • Journal of Korean Society for Geospatial Information Science / v.25 no.1 / pp.29-36 / 2017
  • In this paper, we implement a prototype for spatial big data analysis based on Spark, an in-memory system, and use it to compare the performance of spatial split algorithms. In cluster computing environments, big data is divided into blocks of a certain size in order to balance the computing load. Existing research showed that, for Hadoop-based spatial big data systems, splitting by spatial location is more effective than the general sequential split method. A Hadoop-based spatial data system stores raw data as-is in spatially divided blocks, whereas the proposed Spark-based spatial analysis system converts spatial data into an in-memory data structure and stores it in spatial blocks for search efficiency. We therefore propose an in-memory spatial big data prototype and a spatial split block storage method, compare the performance of existing spatial split algorithms on the proposed prototype, and present an appropriate spatial split strategy for a Spark-based big data system. In the experiments, we compared the query execution times of the spatial split algorithms and confirmed that the BSP algorithm shows the best performance.
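
A minimal PySpark sketch of the idea of a spatial split: points are keyed by grid cell and co-located with partitionBy so that a spatial query touches few partitions. The uniform grid here is only the simplest split; the BSP and other algorithms compared in the paper choose block boundaries adaptively. Cell size, grid width, and the data are illustrative, and a Spark installation is assumed.

    from pyspark import SparkContext

    sc = SparkContext(appName="spatial-split-sketch")

    CELL = 10.0      # illustrative grid cell size
    GRID_W = 100     # cells per row, used to linearize (cx, cy) into one key

    def cell_id(x, y):
        """Map a point to a linearized grid-cell key (uniform-grid split)."""
        return int(x // CELL) + int(y // CELL) * GRID_W

    points = sc.parallelize([(12.3, 45.6), (13.1, 44.0), (87.9, 3.2)])
    keyed = points.map(lambda p: (cell_id(*p), p))

    # Co-locate each spatial block in one partition so that a range query
    # only has to touch the partitions whose cells overlap the query window.
    blocks = keyed.partitionBy(16, partitionFunc=lambda k: k % 16)

    window_cells = {cell_id(12.0, 45.0)}
    hits = blocks.filter(lambda kv: kv[0] in window_cells).values().collect()
    print(hits)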

The Algorithm of Protein Spots Segmentation using Watersheds-based Hierarchical Threshold (Watersheds 기반 계층적 이진화를 이용한 단백질 반점 분할 알고리즘)

  • Kim Youngho;Kim JungJa;Kim Daehyun;Won Yonggwan
    • The KIPS Transactions: Part B / v.12B no.3 s.99 / pp.239-246 / 2005
  • Biologists perform 2DGE (two-dimensional gel electrophoresis) experiments for protein search and analysis; each experiment produces a two-dimensional image. 2DGE is the most widely used method for isolating a target protein by comparative analysis of the protein spot pattern in the gel plane. Protein spot analysis first segments the protein spots spread over the 2D gel plane by image processing, and then finds significant spots through comparative analysis against the protein pattern of a control group. For spot detection, earlier 2DGE image analysis applied Gaussian fitting, whereas recent work applies the watershed algorithm, a morphological region-based segmentation method. Watershed segmentation can rapidly segment the regions of interest in large images, but it suffers from under-segmentation and over-segmentation of spot areas where the gray levels vary continuously. This drawback is partly mitigated by introducing marker points, but a split-and-merge step is still needed. This paper introduces a novel marker search for protein spots using watershed-based hierarchical thresholding, which resolves this problem of marker-driven watershed segmentation.
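
For orientation, the standard marker-driven watershed pipeline that the paper builds on can be sketched with scikit-image (0.19+, where watershed lives in skimage.segmentation); the paper's contribution, finding the markers by hierarchical thresholding, is only indicated in the final comment. The threshold and min_distance values are illustrative.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    def segment_spots(gray, thresh):
        """Marker-driven watershed: one marker per presumed spot, then flooding."""
        mask = gray < thresh                      # spots are dark on a light gel
        distance = ndi.distance_transform_edt(mask)
        # One marker per local maximum of the distance map (presumed spot center)
        coords = peak_local_max(distance, min_distance=5, labels=mask)
        markers = np.zeros_like(gray, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        return watershed(-distance, markers, mask=mask)

    # Example: two overlapping disks separate into two labels
    yy, xx = np.mgrid[0:80, 0:80]
    img = np.full((80, 80), 255, dtype=np.uint8)
    img[(xx - 30) ** 2 + (yy - 40) ** 2 < 144] = 50
    img[(xx - 48) ** 2 + (yy - 40) ** 2 < 144] = 50
    print(np.unique(segment_spots(img, thresh=128)))  # 0 (background), 1, 2

    # A hierarchical-threshold marker search (as in the paper) would instead
    # derive the markers by sweeping `thresh` and keeping stable components.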

A Quality-Attribute-Driven Software Architecture Brokering Mechanism for Intelligent Service Robots (지능형 서비스 로봇을 위한 품질특성 기반의 소프트웨어 아키텍처 브로커링 방법)

  • Seo, Seung-Yeol;Koo, Hyung-Min;Ko, In-Young
    • Journal of KIISE: Software and Applications / v.36 no.1 / pp.21-29 / 2009
  • An intelligent service robot is a robot that monitors its surroundings and then provides a service to meet a user's goal. It is normally impossible for a robot to anticipate all the needs of its user and all situations in the surroundings ahead, and to prepare every function needed to cope with them. It is therefore required to support a self-growing capability by which robots can extend their functionality based on users' needs and external conditions. In this paper, as an enabler of the self-growing capability, we propose a method that allows a robot to select a component-composition pattern represented in an architectural form (called a sub-architecture), and to extend its functionality by obtaining the set of software components prescribed in the pattern. A sub-architecture is selected and instantiated based not only on the functionality required but also on the quality requirements of the user and the surrounding environment. To provide this method, we constructed a quality-attributes-in-use ontology and developed a brokering mechanism that matches the quality requirements of users and surroundings against the quality attributes of sub-architectures. The ontology provides the common vocabulary for representing quality requirements and attributes, and enables semantically based reasoning in matching and instantiating appropriate sub-architectures when providing services to users. This ontology-based approach provides great flexibility in extending robot functionality with available software components and narrows the gap between users' quality requirements and the quality of the actual services provided by a robot.
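
The paper's matching is ontology-based and semantic; as a purely numeric stand-in that shows only the brokering shape (score each candidate sub-architecture against the required quality attributes, pick the best cover), here is a small sketch. The attribute names, scores, and catalog are invented for illustration.

    # Hypothetical quality requirements from the user and surroundings
    REQUIRED = {"responsiveness": 0.8, "accuracy": 0.6}

    # Hypothetical catalog of sub-architectures and their quality attributes
    SUB_ARCHITECTURES = {
        "fast_nav":    {"responsiveness": 0.9, "accuracy": 0.5},
        "precise_nav": {"responsiveness": 0.5, "accuracy": 0.9},
    }

    def broker(required, catalog):
        """Pick the sub-architecture whose attributes best cover the requirements."""
        def score(attrs):
            # Penalize only shortfalls; surplus quality is not penalized
            return sum(min(0.0, attrs.get(k, 0.0) - v) for k, v in required.items())
        return max(catalog, key=lambda name: score(catalog[name]))

    print(broker(REQUIRED, SUB_ARCHITECTURES))   # -> "fast_nav" under these numbers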

Design and Implementation of Communication Mechanism between External Educational Contents and LAMS (LAMS와 외부 교육용 콘텐츠간의 통신 메커니즘의 설계 및 구현)

  • Park, Chan;Jung, Seok-In;Han, Cheol-Dong;Seong, Dong-Ook;Yoo, Jae-Soo;Yoo, Kwan-Hee
    • The Journal of the Korea Contents Association / v.9 no.3 / pp.361-371 / 2009
  • LAMS (Learning Activity Management System) [1] is a useful tool for effectively designing and managing learning activities such as web search, chat, forum, grouping, and board activities. Although LAMS has been upgraded to support convenient creation of e-learning content, it has no method for communicating with external educational contents (EECs) built with external tools such as Flash, Java, and Visual C++. LAMS, which operates in a web environment, should manage all EECs, such as video and other dynamic educational content, as educational contents in the LAMS database; however, the current LAMS supports neither recording information about EECs in its database nor retrieving such information from it. In this paper, we propose a communication mechanism between LAMS and EECs to solve this problem. In particular, the mechanism derives statistical data from the collected information, provides it for use in instruction, and enables various learning-management controls that were impossible in the original LAMS. Based on the proposed mechanism, teachers using LAMS can create more varied educational content and manage it within the system.
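
The paper's wire protocol is not given here, so the following is a hypothetical sketch of the shape such a mechanism could take: an EEC posts its result over HTTP to a LAMS-side endpoint that records it in the database, from which statistics can later be drawn. The endpoint, table, and payload fields are all invented.

    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    db = sqlite3.connect("lams_eec.db", check_same_thread=False)
    db.execute("CREATE TABLE IF NOT EXISTS eec_result "
               "(learner TEXT, activity TEXT, score REAL)")

    class EECHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # An EEC (e.g., a Flash activity) reports its result to the LAMS side
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            db.execute("INSERT INTO eec_result VALUES (?, ?, ?)",
                       (body["learner"], body["activity"], body["score"]))
            db.commit()
            self.send_response(204)
            self.end_headers()

    HTTPServer(("localhost", 8080), EECHandler).serve_forever()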

Code Generation for Integrity Constraint Check in Objectivity/C++ (Objectivity/C++에서 무결성 제약조건 확인을 위한 코드 생성)

  • Kim, In-Tae;Kim, Gi-Chang;Yu, Sang-Bong;Cha, Sang-Gyun
    • Journal of KIISE: Computing Practices and Letters / v.5 no.4 / pp.416-425 / 1999
  • To cope with the complexity of handling integrity constraints, numerous researchers have suggested using a rule-based system, in which integrity constraints are expressed as rules and stored in a rule base. A rule manager and an integrity constraint manager cooperate to check the integrity constraints efficiently. In this approach, however, the integrity constraint manager has to monitor the activity of an application program constantly to catch any database operation, and for each operation it has to check the relevant rules with the help of the rule manager, resulting in considerable delays in database access. We propose to insert the constraint-checking code into the application program directly at compile time. With the checking code inserted, the application program can check integrity constraints by itself, without the intervention of the integrity constraint manager. We investigate what kinds of statements require the checking of constraints, show how the compiler can detect those statements, and show how constraint-checking code can be inserted into the program, by modifying the GCC YACC file for Objectivity/C++, an object-oriented database programming language.
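
The paper's implementation modifies the GCC YACC grammar for Objectivity/C++; the same compile-time idea, rewriting the program so that every update statement is followed by a constraint check, can be illustrated in Python with the standard ast module (3.9+ for ast.unparse). check_constraints is a hypothetical hook standing in for the generated checking code.

    import ast

    SOURCE = """
    emp.salary = 200000
    log = "updated"
    """

    class InsertChecks(ast.NodeTransformer):
        """Insert a constraint check right after each attribute assignment,
        mimicking compile-time instrumentation of update statements."""
        def visit_Assign(self, node):
            target = node.targets[0]
            if isinstance(target, ast.Attribute):   # e.g. emp.salary = ...
                check = ast.Expr(ast.Call(
                    func=ast.Name(id="check_constraints", ctx=ast.Load()),
                    args=[ast.Name(id=target.value.id, ctx=ast.Load())],
                    keywords=[]))
                return [node, check]                # assignment, then the check
            return node

    tree = ast.parse(SOURCE.strip())
    tree = ast.fix_missing_locations(InsertChecks().visit(tree))
    print(ast.unparse(tree))   # shows the inserted check_constraints(emp) call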

Performance Tests of 3D Data Models for Laser Radar Simulation (레이저레이더 시뮬레이션을 위한 3차원 데이터 모델의 성능 테스트)

  • Kim, Geun-Han;Kim, Hye-Young;Jun, Chul-Min
    • Journal of Korean Society for Geospatial Information Science / v.17 no.3 / pp.97-107 / 2009
  • Experiments with real guided weapons are impractical for LADAR (laser radar) development; therefore, a computing environment that can simulate 3D detection by LADAR is needed. Such simulations must handle large volumes of data representing buildings and terrain over a wide area, and they also need information about 3D target objects, such as the material and echo rate of building walls. However, the 3D models in current use mostly focus on visualization, are maintained in file-based formats, and do not contain such semantic information. As a solution to these problems, this study suggests a method that uses a spatial DBMS and a 3D model suitable for LADAR simulation. The 3D models in previous studies were developed to serve different purposes, so it is not easy to choose one that is optimized for LADAR simulation. In this study, four representative 3D models are first defined, and each is tested under different performance scenarios. As a result, one model, "Body-Face", is selected as the most suitable for the simulation, and a test simulation is carried out using it.
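
A minimal sketch of what a "Body-Face" layout in a spatial DBMS could look like: one table of building bodies and one of wall faces carrying the semantics (material, echo rate) the simulation needs. The schema is hypothetical, uses SQLite for self-containment, and stores geometry as WKT text where a real spatial DBMS would use a geometry type.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE body (
        body_id   INTEGER PRIMARY KEY,
        name      TEXT
    );
    CREATE TABLE face (
        face_id   INTEGER PRIMARY KEY,
        body_id   INTEGER REFERENCES body(body_id),
        geom_wkt  TEXT,     -- wall-face polygon (WKT stand-in for a spatial type)
        material  TEXT,     -- semantic attribute used by the LADAR model
        echo_rate REAL      -- reflectivity for the laser return
    );
    """)
    db.execute("INSERT INTO body VALUES (1, 'office block')")
    db.execute("INSERT INTO face VALUES "
               "(1, 1, 'POLYGON((0 0,10 0,10 8,0 8,0 0))', 'concrete', 0.35)")

    # The simulator fetches faces with their semantics for each candidate body
    for row in db.execute("SELECT material, echo_rate FROM face WHERE body_id = 1"):
        print(row)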


An Efficient Top-k Query Processing Algorithm over Encrypted Outsourced-Data in the Cloud (아웃소싱 암호화 데이터에 대한 효율적인 Top-k 질의 처리 알고리즘)

  • Kim, Jong Wook;Suh, Young-Kyoon
    • KIPS Transactions on Software and Data Engineering / v.4 no.12 / pp.543-548 / 2015
  • Top-k query processing has become extremely important with the explosion of data produced by a variety of applications. Top-k queries return the best k results ordered by a user-provided monotone scoring function. As cloud computing services become ever more popular, much attention has been paid to cloud-based data outsourcing, in which clients' data are stored and managed by the cloud. Such outsourcing, however, raises a critical security concern: sensitive data may be misused by unauthorized users. Hence, it is essential to encrypt sensitive data before outsourcing them to the cloud. However, little attention has been paid to efficient top-k processing on encrypted cloud data. In this paper, we propose a novel top-k processing algorithm that can efficiently process a large amount of encrypted data in the cloud. The main idea is to prune unpromising intermediate results at an early phase, without decrypting the encrypted data, by leveraging an order-preserving encryption technique. Experimental results show that the proposed algorithm reduces the overhead of client systems by 10x to 10,000x.
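
The pruning idea can be shown end to end with a deliberately toy order-preserving map (a strictly increasing affine function, which preserves order and provides no real security): because ciphertext comparisons agree with plaintext comparisons, the server can keep the k largest rows without decrypting anything. Key values and data are illustrative.

    import heapq

    SECRET_A, SECRET_B = 37, 1000   # toy key: strictly increasing affine map

    def ope_encrypt(score):
        """Toy order-preserving 'encryption': preserves order, nothing more.
        Real OPE schemes are far more involved; this only illustrates pruning."""
        return SECRET_A * score + SECRET_B

    def server_topk(encrypted_rows, k):
        """The server keeps only the k largest ciphertexts without decrypting:
        order preservation makes ciphertext comparisons meaningful."""
        return heapq.nlargest(k, encrypted_rows, key=lambda r: r[1])

    rows = [("doc%d" % i, ope_encrypt(s)) for i, s in enumerate([3, 9, 1, 7, 5])]
    topk = server_topk(rows, k=2)        # pruning happens on ciphertexts
    print([doc for doc, _ in topk])      # client decrypts/uses only k results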

AST-AET Data Migration Strategy considering Characteristics of Temporal Data (시간지원 데이터의 특성을 고려한 AST-AET 데이터 이동 기법)

  • Yun, Hong-Won;Gim, Gyong-Sok
    • Journal of KIISE: Databases / v.28 no.3 / pp.384-394 / 2001
  • In this paper, we propose the AST-AET (Average valid Start Time - Average valid End Time) data migration strategy, based on a storage structure in which temporal data are divided into a past segment, a current segment, and a future segment. We define the AST and AET values used by the strategy, define the entity versions to be stored in each segment, and describe methods for computing AST and AET as well as the processes for finding and moving entity versions due for migration. We compare average response times for user queries between the AST-AET strategy and the existing LST-GET (Least valid Start Time - Greatest valid End Time) strategy. The experimental results show that, when there are no LLTs (long-lived tuples), there is little difference in performance between the two strategies because their current segments are nearly equal in size. When there are LLTs, however, the average response time of AST-AET is smaller than that of LST-GET, because the current segment under LST-GET grows larger. In addition, when the average interarrival time of temporal queries is varied, the average response time of AST-AET generally remains smaller than that of LST-GET.
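
A small sketch of the flavor of the strategy: AST and AET are averages of the stored valid start and end times, and each entity version is routed to the past, current, or future segment by comparing its valid interval against them. The exact migration conditions in the paper may differ; the data and boundary rules here are illustrative (NOW marks open-ended versions).

    # (valid_start, valid_end) toy entity versions; NOW = still valid
    NOW = float("inf")
    versions = [(0, 2), (2, 9), (4, NOW), (9, 12)]

    closed = [(s, e) for s, e in versions if e != NOW]
    ast_t = sum(s for s, _ in versions) / len(versions)   # average valid start time
    aet_t = sum(e for _, e in closed) / len(closed)       # average valid end time

    def segment(start, end):
        """Assign a version to the past, current, or future segment via AST/AET."""
        if end != NOW and end <= ast_t:
            return "past"
        if start >= aet_t:
            return "future"
        return "current"

    for v in versions:
        print(v, "->", segment(*v))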
