• Title/Summary/Keyword: distributed representation

Determination of Parameter Value in Constraint of Sparse Spectrum Fitting DOA Estimation Algorithm (희소성 스펙트럼 피팅 도래각 추정 알고리즘의 제한조건에 포함된 상수 결정법)

  • Cho, Yunseung;Paik, Ji-Woong;Lee, Joon-Ho
    • The Journal of Korean Institute of Communications and Information Sciences, v.41 no.8, pp.917-920, 2016
  • The SpSF algorithm is a direction-of-arrival (DOA) estimation algorithm based on a sparse representation of the incident signals. The cost function to be optimized for DOA estimation is a multi-dimensional nonlinear function, which is hard to handle directly. After some manipulation, the problem can be cast as a convex optimization problem, which turns out to be a constrained optimization problem in which a parameter in the constraint has to be determined. The solution of the convex optimization problem depends on the specific parameter value in the constraint. In this paper, we propose a rule of thumb for determining that parameter value. Based on the fact that the noise in the array elements is complex Gaussian distributed with zero mean, the average of the Frobenius norm of the matrix in the constraint can be rigorously derived. The parameter in the constraint is set to two times the average of the Frobenius norm of that matrix. It is shown that the SpSF algorithm actually works with the parameter value set by the proposed method.
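The rule of thumb above can be illustrated with a short numerical sketch, assuming an M-element array, L snapshots, and i.i.d. zero-mean complex Gaussian noise of variance sigma2; the variable names and the Monte Carlo check are ours, not the paper's derivation.

```python
import numpy as np

def constraint_parameter(M, L, sigma2, trials=2000, rng=None):
    """Estimate beta = 2 * E[||N||_F] for an M x L matrix N whose entries
    are i.i.d. zero-mean complex Gaussian with variance sigma2."""
    rng = np.random.default_rng() if rng is None else rng
    scale = np.sqrt(sigma2 / 2.0)          # variance split between real and imaginary parts
    norms = np.empty(trials)
    for t in range(trials):
        N = scale * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
        norms[t] = np.linalg.norm(N, ord="fro")
    return 2.0 * norms.mean()

# For moderate M*L the average Frobenius norm is close to sqrt(M * L * sigma2),
# so beta ~ 2 * sqrt(M * L * sigma2) is a quick closed-form proxy.
print(constraint_parameter(M=8, L=100, sigma2=1.0))
```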

Implementation of CORBA based Spatial Data Provider for Interoperability (상호운용을 지원하는 코바 기반 공간 데이터 제공자의 설계 및 구현)

  • Kim, Min-Seok;An, Kyoung-Hwan;Hong, Bong-Hee
    • Journal of Korea Spatial Information System Society, v.1 no.2 s.2, pp.33-46, 1999
  • In distributed computing platforms like CORBA, wrappers are used to integrate heterogeneous systems or databases. A spatial data provider is one such wrapper because it provides clients with uniform access interfaces to diverse data sources. Implementing a separate spatial data provider for each data source is inefficient because the wrapper modules end up being coded redundantly. This paper presents a new architecture for the spatial data provider consisting of two layers of objects: independent wrapper components and dependent wrapper components. Independent wrapper components can be reused when implementing a new data provider for a new data source, while dependent wrapper components must be newly coded for every data source. This paper also discusses how query results are represented in the middleware. There are two ways of keeping query results in the middleware: one is to keep them as non-CORBA objects, and the other is to transform them into CORBA objects. An evaluation of the two methods shows that the cost of creating CORBA objects is very high.
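The two-layer wrapper idea can be sketched roughly as follows; this is plain Python rather than CORBA IDL, and all class and method names are invented for illustration. The source-independent layer exposes the uniform access interface once, while only the thin source-dependent layer is rewritten per data source.

```python
from abc import ABC, abstractmethod

class SourceDependentWrapper(ABC):
    """Re-coded for every data source: knows the native access method."""
    @abstractmethod
    def fetch(self, region):            # returns rows in the source's native format
        ...
    @abstractmethod
    def to_feature(self, row):          # converts a native row to a common feature dict
        ...

class SpatialDataProvider:
    """Source-independent layer: reused unchanged for each new provider."""
    def __init__(self, backend: SourceDependentWrapper):
        self.backend = backend
    def query(self, region):
        # uniform access interface exposed to clients
        return [self.backend.to_feature(r) for r in self.backend.fetch(region)]

class ShapefileWrapper(SourceDependentWrapper):   # one concrete, source-dependent part
    def fetch(self, region):
        return [("road", region)]                 # placeholder for real file access
    def to_feature(self, row):
        return {"type": row[0], "bbox": row[1]}

print(SpatialDataProvider(ShapefileWrapper()).query((0, 0, 10, 10)))
```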

Grid Based Nonpoint Source Pollution Load Modelling

  • Niaraki, Abolghasem Sadeghi;Park, Jae-Min;Kim, Kye-Hyun;Lee, Chul-Yong
    • Korea Spatial Information System Society: Conference Proceedings, 2007.06a, pp.246-251, 2007
  • The purpose of this study is to develop a grid based model for calculating the critical nonpoint source (NPS) pollution loads (BOD, TN, TP) in the Nak-dong area of South Korea. In the last two decades, NPS pollution has become a research topic that has resulted in the development of numerous modeling techniques. Watershed researchers need to be able to characterize water quality, including estimates of NPS pollution loads. A Geographic Information System (GIS) has been designed for the assessment of NPS pollution in a watershed. It uses data such as DEM, precipitation, stream network, discharge, and land use data sets, and utilizes a grid representation of a watershed to approximate average annual pollution loads and concentrations. The difficulty in traditional NPS modeling is identifying the sources and quantifying the loads. This research investigates the correlation of NPS pollution concentrations with land uses in a watershed by calculating Expected Mean Concentrations (EMC). The work was accomplished using a grid based modelling technique that encompasses three stages. The first step estimates a runoff grid from the precipitation grid and a runoff coefficient. The second step derives the grid based model for calculating NPS pollution loads. The last step validates the grid based model against a traditional pollution load calculation using a statistical t-test. The results on real data illustrate the merits of the grid based modelling approach. The model therefore provides a method of estimating and simulating point loads along with the spatially distributed NPS pollution loads. The pollutant concentration from local runoff is assumed to be directly related to land use in the region and is not considered to vary from event to event or within areas of similar land uses. Consequently, a single mean estimated pollutant concentration is assigned to each land use rather than unique concentrations for different soil types, crops, and so on.
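A minimal numpy sketch of the first two stages, under assumed land-use codes, runoff coefficients, and EMC values that are purely illustrative (they are not the study's calibrated figures): runoff per cell is precipitation times a runoff coefficient, and the annual load is runoff volume times the EMC assigned to that cell's land use.

```python
import numpy as np

# toy 3x3 watershed grids; real inputs would come from DEM, land-use and precipitation layers
precip_mm   = np.array([[900., 950., 1000.],
                        [980., 1020., 990.],
                        [940., 960., 1010.]])
landuse     = np.array([[1, 1, 2],
                        [2, 3, 3],
                        [1, 2, 3]])          # 1=urban, 2=agricultural, 3=forest (hypothetical codes)
runoff_coef = {1: 0.85, 2: 0.45, 3: 0.20}    # illustrative runoff coefficients
emc_bod     = {1: 12.0, 2: 6.5, 3: 1.5}      # illustrative EMC values in mg/L per land use

cell_area_m2 = 30.0 * 30.0                   # 30 m grid cells

coef = np.vectorize(runoff_coef.get)(landuse)
emc  = np.vectorize(emc_bod.get)(landuse)

runoff_m3 = precip_mm / 1000.0 * coef * cell_area_m2   # annual runoff volume per cell
load_kg   = runoff_m3 * emc * 1000.0 / 1e6             # mg/L * m^3 -> kg per cell

print("total annual BOD load (kg):", load_kg.sum())
```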

Network Time Protocol Extension for Wireless Sensor Networks (무선 센서 네트워크를 위한 인터넷 시각 동기 프로토콜 확장)

  • Hwang, So-Young
    • Journal of the Korea Institute of Information and Communication Engineering, v.15 no.12, pp.2563-2567, 2011
  • Advances in smart sensors, embedded systems, low-power design, ad-hoc networks and MEMS have allowed the development of low-cost small sensor nodes with computation and wireless communication capabilities that can form distributed wireless sensor networks. Time information and time synchronization are fundamental building blocks in wireless sensor networks, since many sensor network applications need time information for object tracking, consistent state updates, duplicate detection and temporal order delivery. Various time synchronization protocols have been proposed for sensor networks because of their limited computing power and resources. However, none of these protocols has been designed with a time representation scheme in mind. A global time format such as UTC TOD (Universal Time Coordinated, Time Of Day) is very useful in sensor network applications. In this paper we propose a network time protocol extension for global time representation in wireless sensor networks.
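As a rough illustration of the kind of global time representation involved, the sketch below converts a standard 64-bit NTP timestamp into a UTC time-of-day string; it is not the paper's actual packet format or extension fields, and leap seconds are ignored.

```python
from datetime import datetime, timedelta, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)   # NTP era 0 epoch

def ntp64_to_utc_tod(ntp_timestamp):
    """Convert a 64-bit NTP timestamp (32-bit seconds, 32-bit fraction)
    into a UTC time-of-day string."""
    seconds  = ntp_timestamp >> 32
    fraction = ntp_timestamp & 0xFFFFFFFF
    micros   = (fraction * 1_000_000) >> 32
    utc = NTP_EPOCH + timedelta(seconds=seconds, microseconds=micros)
    return utc.strftime("%H:%M:%S.%f")

# example input value is purely illustrative
print(ntp64_to_utc_tod((3_500_000_000 << 32) | 0x8000_0000))
```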

A JCML and a GUI-based Editor for Specifying Job Control Flow on Grid (그리드에서 작업 흐름을 효과적으로 제어하기 위한 JCML과 GUI 기반의 편집기)

  • 황석찬;최재영;이상산
    • Journal of KIISE: Computer Systems and Theory, v.31 no.3_4, pp.152-159, 2004
  • The Grid is an emerging computing infrastructure that will substitute for existing distributed systems. However, end users have difficulty using the Grid because of its complicated usage, an inherent characteristic of its heterogeneous mechanisms. In this paper, we present the JCML (Job Control Markup Language) and its GUI-based editor, which not only provide users with ease of use and an improved working environment, but also help them execute their jobs efficiently. The JCML is a job control language that improves on the RSL of Globus, which defines global services in the Grid. The JCML is designed to support flexibility among various Grid services using standard XML, and it makes use of a graph representation method, GXL (Graph eXchange Language), to specify detailed job properties and dependencies among jobs using nodes and edges. The JCML editor provides users with a GUI-based interface. With the JCML editor, a complicated job order can be easily specified using very simple mouse manipulations such as drag-and-drop.
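Because the JCML schema itself is not given in the abstract, the following sketch only shows the general GXL-style idea of encoding jobs as nodes and dependencies as edges; every element and attribute name here is hypothetical.

```python
import xml.etree.ElementTree as ET

# hypothetical job graph: B and C depend on A, D depends on B and C
jobs = ["A", "B", "C", "D"]
deps = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

graph = ET.Element("graph", id="jobflow", edgemode="directed")   # GXL-like root element
for j in jobs:
    node = ET.SubElement(graph, "node", id=j)
    ET.SubElement(node, "attr", name="executable").text = f"{j}.sh"
for src, dst in deps:
    ET.SubElement(graph, "edge", attrib={"from": src, "to": dst})

print(ET.tostring(graph, encoding="unicode"))
```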

Multilingual Product Retrieval Agent through Semantic Web and Semantic Networks (Semantic Web과 Semantic Network을 활용한 다국어 상품검색 에이전트)

  • Moon Yoo-Jin
    • Journal of Intelligence and Information Systems, v.10 no.2, pp.1-13, 2004
  • This paper presents a method for a multilingual product retrieval agent using XML and semantic networks in e-commerce. Product retrieval is an important process, since it represents the interface of customer contact in e-commerce. Keyword-based retrieval is efficient as long as the product information is structured and organized. But when product information is spread across many online shopping malls, and especially when it is expressed in different languages with different cultural backgrounds, buyers' product retrieval needs language translation with ambiguities resolved in a specific context. This paper presents an RDF modeling case that resolves semantic problems in the representation of product information across the boundaries of language domains. Adopting the UNSPSC code system, this paper designs and implements an architecture for multilingual product retrieval agents. The architecture is based on a central repository model of product catalog management with distributed updating processes, and it includes the perspectives of both buyers and suppliers. The consistency and version management of product information are controlled by the UNSPSC code system. Multilingual product names are resolved by semantic networks, a thesaurus and an ontology dictionary for product names.
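A toy sketch of the central-repository idea, with invented UNSPSC codes and labels (not the paper's data model): the code acts as the language-neutral key, and per-language names hang off it so a query in one language can be answered in another.

```python
# hypothetical central catalog keyed by UNSPSC code, with labels per language
catalog = {
    "43211503": {"en": "notebook computer", "ko": "노트북 컴퓨터", "ja": "ノートパソコン"},
    "43211508": {"en": "desktop computer",  "ko": "데스크톱 컴퓨터", "ja": "デスクトップパソコン"},
}

def search(term, lang):
    """Resolve a product name in any supported language to its UNSPSC entry."""
    hits = []
    for code, names in catalog.items():
        if any(term.lower() in n.lower() for n in names.values()):
            hits.append((code, names.get(lang, names["en"])))
    return hits

# a buyer searching with an English keyword still reaches the Korean label
print(search("notebook", lang="ko"))   # -> [('43211503', '노트북 컴퓨터')]
```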

A Keyword-based Filtering Technique of Document-centric XML using NFA Representation (NFA 표현을 사용한 문서-중심적 XML의 키워드 기반 필터링 기법)

  • Lee, Kyoung-Han;Park, Seog
    • Journal of KIISE: Databases, v.33 no.5, pp.437-452, 2006
  • In this paper, we propose an extended XPath specification that includes the special matching character '%' used in the LIKE operation of SQL, in order to overcome the difficulty of writing queries that filter element contents well with the previous XPath specification. We also present a novel technique, called Pfilter, for filtering a collection of document-centric XML documents that exploits the extended XPath specification. By sharing the common prefix characters of the operands in value-based predicates, Pfilter improves the performance of processing them. We present several performance studies comparing Pfilter with YFilter with respect to efficiency and scalability using multi-query processing time (MQPT), and report results for inserting, deleting, and processing value-based predicates. In conclusion, our approach provides a core algorithm for evaluating the contains() function of XPath queries in previous XML filtering research, and a foundation for building XML-based distributed information systems.
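The prefix-sharing of predicate operands can be pictured with a small trie; this is a generic illustration rather than the Pfilter structure itself, and the matching routine only mimics what an extended contains()/'%' predicate asks for.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.query_ids = []        # queries whose operand ends at this node

def insert(root, operand, qid):
    node = root
    for ch in operand:             # operands with a common prefix share the same path
        node = node.children.setdefault(ch, TrieNode())
    node.query_ids.append(qid)

def match_contains(root, text):
    """Return ids of queries whose operand occurs somewhere in `text`."""
    matched = set()
    for start in range(len(text)):
        node = root
        for ch in text[start:]:
            node = node.children.get(ch)
            if node is None:
                break
            matched.update(node.query_ids)
    return matched

root = TrieNode()
insert(root, "seoul", 1)
insert(root, "seo",   2)           # shares the prefix "seo" with query 1
print(match_contains(root, "the seoul office"))   # -> {1, 2}
```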

Semantic Web Ontology for Research Community (국가과학기술 R&D 기반정보 온톨로지)

  • Kang, In-Su;Jung, Han-Min;Lee, Seung-Woo;Kim, Pyung;Sung, Won-Kyung
    • Proceedings of the Korea Contents Association Conference, 2006.05a, pp.231-234, 2006
  • Semantic web ontologies can be viewed as logic-based, domain-oriented content that allows distributed and heterogeneous information to be semantically integrated and automatically circulated, and enables implicit knowledge to be inferred. This paper describes the 'Science and Technology Research Area' ontology being developed by the Korea Institute of Science and Technology Information (KISTI). The ontology was defined to help researchers and project planners grasp the research community from a variety of viewpoints. We describe classes and properties as ontology components and exemplify the representation of real instances in the ontology. In order to represent the identities of real-world instances within the ontology, it employs both class-dependent URI assignment schemes and identity resolution methods.
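One possible reading of a class-dependent URI assignment scheme is sketched below; the namespace, class names, and key fields are hypothetical and are not KISTI's actual scheme. Each class picks the identifying fields used to mint a stable URI, so the same real-world researcher or project always maps to the same resource.

```python
import hashlib

BASE = "http://example.org/rnd-ontology/"        # hypothetical namespace

# per-class choice of identifying fields (illustrative only)
KEY_FIELDS = {
    "Researcher": ("name", "affiliation"),
    "Project":    ("program", "project_no"),
}

def mint_uri(cls, record):
    """Class-dependent URI: hash only the fields that identify instances of `cls`."""
    key = "|".join(str(record[f]) for f in KEY_FIELDS[cls])
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()[:12]
    return f"{BASE}{cls}/{digest}"

r = {"name": "Hong Gil-Dong", "affiliation": "KISTI", "email": "hong@example.org"}
print(mint_uri("Researcher", r))   # identical identifying fields always yield the same URI
```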

Representation of Interactions in Data Model for Hybrid Structural Experiments (하이브리드 구조실험을 위한 데이터 모델에서의 상호작용의 표현)

  • Lee, Chang-Ho
    • Journal of the Computational Structural Engineering Institute of Korea, v.23 no.2, pp.123-137, 2010
  • A hybrid structural experiment decomposes a structure into independent substructures that can be tested or simulated. The substructures being tested or simulated may be distributed across facilities at different locations and are managed by a simulation coordinator. Interactions exist among the simulation coordinator and the substructures, since they exchange commands and feedback during the experimental process. These interactions are described in this paper for an example hybrid structural experiment using the classes and objects of the Lehigh Model, one of the data models for structural experiments. The simulation coordinator and the substructures have objects for the interaction data files and are linked together through the same types of interface links. The objects for the interactions presented in this paper can be implemented in a consistent way and used for developing computer systems for hybrid structural experiments.
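A schematic sketch of the coordinator/substructure exchange described above; the class and method names are invented and do not reproduce the Lehigh Model's actual classes. The coordinator sends a command to every substructure over its interface link and collects the feedback for the next step.

```python
class Substructure:
    """A tested or simulated piece of the decomposed structure."""
    def __init__(self, name, stiffness):
        self.name, self.stiffness = name, stiffness
    def apply_command(self, displacement):
        # feedback: restoring force for the imposed displacement (placeholder model)
        return {"substructure": self.name, "force": self.stiffness * displacement}

class SimulationCoordinator:
    """Holds one interface link per substructure and drives the stepwise exchange."""
    def __init__(self, substructures):
        self.links = list(substructures)        # same type of link for every substructure
    def run_step(self, displacement):
        feedbacks = [s.apply_command(displacement) for s in self.links]
        return feedbacks                        # would be written to interaction data files

coord = SimulationCoordinator([Substructure("pier-1", 2.0e5), Substructure("deck", 8.0e4)])
print(coord.run_step(displacement=0.01))
```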

Variable Radix-Two Multibit Coding and Its VLSI Implementation of DCT/IDCT (가변길이 다중비트 코딩을 이용한 DCT/IDCT의 설계)

  • 김대원;최준림
    • Journal of the Institute of Electronics Engineers of Korea SD, v.39 no.12, pp.1062-1070, 2002
  • In this paper, a variable radix-two multibit coding algorithm is presented and applied to the implementation of the discrete cosine transform (DCT) and the inverse discrete cosine transform (IDCT). Variable radix-two multibit coding means the 2^k SD (signed digit) representation obtained by overlapped multibit scanning with a variable shift method. An SD representation in radix 2^k generates partial products that can be easily implemented with shifters and adders. This algorithm is particularly powerful for the hardware implementation of DCT/IDCT with constant coefficient matrix multiplication. This paper introduces the suggested algorithm, its proof, and the implementation of the DCT/IDCT. The implemented IDCT chip, with 8 PEs (Processing Elements) and one transpose memory, runs at a rate of 400 Mpixels/sec at a 54 MHz clock frequency for high-speed parallel signal processing, and it was verified in HDTV and MPEG decoders.
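For reference, the sketch below shows the simpler canonical signed-digit (CSD) recoding of a constant coefficient, where multiplication reduces to shifts and adds; the paper's variable-shift, overlapped multibit (2^k SD) scanning generalizes this idea, and the code is not the paper's algorithm.

```python
def csd_digits(coeff):
    """Recode a positive integer constant into signed digits in {-1, 0, +1}
    with no two adjacent non-zero digits (canonical signed-digit form)."""
    digits = []
    while coeff:
        if coeff & 1:
            d = 2 - (coeff & 3)      # +1 if the low bits are ...01, -1 if ...11
            coeff -= d
        else:
            d = 0
        digits.append(d)
        coeff >>= 1
    return digits                    # least-significant digit first

def multiply_by_constant(x, coeff):
    """Constant multiplication as shift-and-add of the signed-digit partial products,
    which is what a shifter/adder datapath implements in hardware."""
    return sum(d * (x << i) for i, d in enumerate(csd_digits(coeff)))

print(csd_digits(23))                # -> [-1, 0, 0, -1, 0, 1]  (32 - 8 - 1 = 23)
print(multiply_by_constant(10, 23))  # -> 230
```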