• Title/Summary/Keyword: 변환 규칙 (transformation rules)


Cognitive Approach for Building Intelligent Agent (지능 에이전트 구현의 인지적 접근)

  • Tae Kang-Soo
    • Journal of Internet Computing and Services / v.5 no.2 / pp.97-105 / 2004
  • The reason an intelligent agent cannot understand the representation of its own perception or activity lies in the traditional syntactic approach, which translates a semantic feature into a simulated string. To implement an autonomously learning intelligent agent, Cohen introduces an experimental semantic approach in which the system learns a contentful representation of a physical schema by physically interacting with its environment through its own sensors and effectors. We propose that negation is a meta-level schema that enables an agent to recognize its own physical schema. To improve the planner's efficiency, Graphplan introduces a control rule that manipulates the inconsistency between planning operators, but it cannot cognitively understand negation and suffers from a redundancy problem. By introducing a negative function not, IPP solves this problem, but its approach is still syntactic and is inefficient in terms of time and space. In this paper, we propose that representing a negative fact by a positive atom, called an opposite concept, is a very efficient technique for implementing a cognitive agent, and we demonstrate empirical results supporting this hypothesis (a minimal sketch of the encoding follows this entry).

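A minimal sketch of the opposite-concept idea (illustrative only; the predicates and domain below are hypothetical and not the paper's planner): instead of storing negated facts, every predicate is paired with a positive opposite, so a state always remains a set of positive atoms.

```python
# Hypothetical domain knowledge: each fact and its positive "opposite concept".
OPPOSITE = {
    "door_open": "door_closed",
    "door_closed": "door_open",
    "holding_key": "hand_empty",
    "hand_empty": "holding_key",
}

def assert_fact(state, fact):
    """Adding a fact retracts its opposite, so negation never has to be
    represented explicitly: a positive atom stands in for the negative fact."""
    state.discard(OPPOSITE.get(fact))  # discard(None) is a no-op for unknown facts
    state.add(fact)
    return state

state = {"door_closed", "hand_empty"}
assert_fact(state, "door_open")        # -> {"door_open", "hand_empty"}
print(state)
```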

Data hub system based on SQL/XMDR message using Wrapper for distributed data interoperability (분산 데이터 상호운용을 위한 SQL/XMDR 메시지 기반의 Wrapper를 이용한 데이터 허브 시스템)

  • Moon, Seok-Jae;Jung, Gye-Dong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.11 / pp.2047-2058 / 2007
  • In a geographically distributed business environment, it is difficult for enterprises to eliminate redundancy, filter data sources, integrate data according to standard rules and metadata, and produce an integrated, single view of the data. In particular, exchanging various kinds of data among heterogeneous systems and applications regardless of their types and formats, and keeping the integrated information continuously and exactly synchronized, is of paramount concern. This paper therefore proposes a data hub system based on SQL/XMDR messages to overcome the semantic interoperability problems that arise when data are exchanged or joined between legacy systems. The system uses the message mapping technique of a query transformation system to keep data that are modified in real time consistent across cooperating systems. It can consistently maintain such data when exchanging or joining data between cooperating legacy systems, and it improves the clarity and availability of data by providing a single interface for data retrieval (a hypothetical sketch of the mapping idea follows this entry).
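A hypothetical sketch of the wrapper's query-mapping step (the mapping table, system names, and SQL below are illustrative assumptions, not the paper's SQL/XMDR message format): a hub-level query expressed in global element names is rewritten into each legacy system's local names, which is what lets a single interface cover several sources.

```python
# Illustrative metadata registry: global element -> local element per system.
XMDR_MAPPING = {
    "customer_id": {"erp": "CUST_NO", "crm": "client_id"},
    "order_date":  {"erp": "ORD_DT",  "crm": "created_at"},
}

def rewrite_query(global_columns, target_system):
    """Translate hub-level (global) column names into a legacy system's names."""
    local = [XMDR_MAPPING[col][target_system] for col in global_columns]
    return "SELECT {} FROM orders".format(", ".join(local))

print(rewrite_query(["customer_id", "order_date"], "erp"))
# SELECT CUST_NO, ORD_DT FROM orders
```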

The Linear Stability Derivatives by the Transient Maneuvering Method (과도응답법(過渡應答法)을 이용한 조종미계수(操縱微係數)의 추정(推定)에 관한 연구(硏究))

  • Lee, Seung-Keon
    • Bulletin of the Society of Naval Architects of Korea / v.27 no.3 / pp.31-37 / 1990
  • To obtain the values of the linear stability derivatives, both analytical and experimental methods have been proposed and are in use. The best-known experimental method is the planar motion mechanism (PMM) test. Its concept is to drive the model with a prescribed frequency and amplitude of motion and to measure the hydrodynamic forces. This kind of method is inconvenient, however, when the stability derivatives are wanted over a wider range of frequencies. A different method is therefore attempted, in which a single test run yields the derivatives over a wide frequency range. This technique forces an impulsive motion on the model using an oil-pressure pump. Methods of this kind originate with Scragg, C.A., Cummins, W.E., and Frank, T.; this research is a further development of such preceding works. Todd's Series 60 (Cb=0.7) 2.00 m model is chosen for the test, and the results are compared with van Leeuwen's well-known PMM test results (a schematic relation illustrating the idea follows this entry).

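A schematic of why a single transient run covers a whole frequency range (assuming a linear sway model; the notation is illustrative and not taken from the paper): Fourier-transforming the measured force and sway-velocity records and taking their ratio gives the frequency-dependent derivatives at every frequency excited by the impulse,

$$
\hat{Y}(\omega) \;=\; \bigl(Y_v(\omega) + i\,\omega\,Y_{\dot v}(\omega)\bigr)\,\hat{v}(\omega)
\qquad\Longrightarrow\qquad
Y_v(\omega) + i\,\omega\,Y_{\dot v}(\omega) \;=\; \frac{\hat{Y}(\omega)}{\hat{v}(\omega)},
$$

where $\hat{Y}(\omega)$ and $\hat{v}(\omega)$ denote the Fourier transforms of the transient force and sway-velocity records, so one impulsive run replaces a series of single-frequency PMM runs.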

Improving development environment for embedded software (내장 소프트웨어를 위한 개발 환경의 개선)

  • AHN, ILSOO
    • Journal of Software Engineering Society / v.25 no.1 / pp.1-9 / 2012
  • RFID systems have been widely used in fields such as logistics, distribution, food, security, and traffic. An RFID middleware, one of the key components of an RFID system, performs important functions such as filtering, grouping, and reporting tag data according to given user specifications. However, manual test data generation is very hard because the inputs of the RFID middleware are generated according to the RFID middleware standards and complex encoding rules. To solve this problem, this paper proposes a black-box test technique based on the RFID middleware standards. First, we define ten types of input conversion rules that generate new test data from existing test data based on the standard specifications. Then, using these input conversion rules, we generate various additional test data automatically. To validate the effectiveness of the generated test data, we measure their coverage on an actual RFID middleware. The results show that the generated test data achieve 78% statement coverage and 58% branch coverage in the filtering and grouping classes, and 79% statement coverage and 64% branch coverage in the reporting classes (a hypothetical example of such a rule follows this entry).

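A hypothetical example of one input conversion rule (the EPC-like identifier and the boundary-value rule below are illustrative; the paper's ten standard-based rule types are not reproduced here): an existing test input is mutated automatically into several new inputs.

```python
def boundary_value_rule(tag_epc):
    """Given an EPC-like URN, emit variants whose last numeric field is replaced
    by boundary values (min, min+1, max-1, max) of the same field width."""
    prefix, last = tag_epc.rsplit(".", 1)
    width = len(last)
    boundaries = [0, 1, 10 ** width - 2, 10 ** width - 1]
    return [f"{prefix}.{str(v).zfill(width)}" for v in boundaries]

seed = "urn:epc:id:sgtin:0614141.812345.6789"   # existing (seed) test datum
for variant in boundary_value_rule(seed):
    print(variant)   # four additional test data derived from the seed
```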

Statistical Voice Activity Detector Based on Signal Subspace Model (신호 준공간 모델에 기반한 통계적 음성 검출기)

  • Ryu, Kwang-Chun;Kim, Dong-Kook
    • The Journal of the Acoustical Society of Korea / v.27 no.7 / pp.372-378 / 2008
  • Voice activity detectors (VADs) are important in wireless communication and speech signal processing. In conventional VAD methods, an expression for the likelihood ratio test (LRT) based on statistical models is derived in the discrete Fourier transform (DFT) domain; speech or noise is then decided by comparing the value of this expression with a threshold. This paper presents a new statistical VAD method based on a signal subspace approach. Probabilistic principal component analysis (PPCA) is employed to obtain a signal subspace model that incorporates a probabilistic model of the noisy signal into the signal subspace method. The proposed approach provides a novel decision rule based on the LRT in the signal subspace domain. Experimental results show that the proposed signal-subspace-based VAD outperforms methods based on the widely used Gaussian distribution in the DFT domain (a minimal sketch of such a decision rule follows this entry).
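A minimal sketch of a subspace-domain likelihood ratio test (an assumption-laden stand-in, not the paper's derivation): PPCA models are fitted with scikit-learn's PCA, whose score_samples returns the log-likelihood under the probabilistic PCA model; frame features, model order, and threshold are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_ppca_models(noise_frames, speech_frames, n_components=8):
    """Fit PPCA models (PCA with a probabilistic noise floor) to noise-only and
    noisy-speech training frames (arrays of shape [n_frames, n_features])."""
    noise_model = PCA(n_components=n_components).fit(noise_frames)
    speech_model = PCA(n_components=n_components).fit(speech_frames)
    return noise_model, speech_model

def vad(frames, noise_model, speech_model, threshold=0.0):
    """Return True for frames decided as speech by the subspace-domain LRT."""
    llr = speech_model.score_samples(frames) - noise_model.score_samples(frames)
    return llr > threshold
```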

Knowledge Mining from Many-valued Triadic Dataset based on Concept Hierarchy (개념계층구조를 기반으로 하는 다치 삼원 데이터집합의 지식 추출)

  • Suk-Hyung Hwang;Young-Ae Jung;Se-Woong Hwang
    • Journal of Platform Technology / v.12 no.3 / pp.3-15 / 2024
  • Knowledge mining is a research field that applies techniques such as data modeling, information extraction, analysis, visualization, and result interpretation to find valuable knowledge in diverse, large datasets. It plays a crucial role in transforming raw data into useful knowledge across domains such as business, healthcare, and scientific research. In this paper, we propose analytical techniques for performing knowledge discovery and data mining on various data by extending the Formal Concept Analysis method. We define models for representing the diverse formats and structures of the data to be analyzed, including many-valued data tables and triadic data tables, together with algorithms for data processing (dyadic scaling and flattening), the construction of concept hierarchies, and the extraction of association rules. The usefulness of the proposed technique is demonstrated empirically through experiments applying it to public open data (a small scaling example follows this entry).

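A minimal sketch of dyadic scaling, the preprocessing step that turns a many-valued table into the binary formal context on which concept hierarchies are built (the table, attributes, and thresholds below are illustrative assumptions, not the paper's data):

```python
# Many-valued table: object -> attribute -> value.
MANY_VALUED = {
    "patient1": {"age": 34, "temp": 39.1},
    "patient2": {"age": 71, "temp": 36.6},
    "patient3": {"age": 65, "temp": 38.2},
}

# Scales: original attribute -> (derived binary attribute, membership predicate).
SCALES = {
    "age":  ("elderly", lambda v: v >= 65),
    "temp": ("fever",   lambda v: v >= 37.5),
}

def dyadic_scale(table, scales):
    """Return a binary formal context: object -> set of derived attributes."""
    context = {}
    for obj, values in table.items():
        context[obj] = {name for attr, (name, pred) in scales.items()
                        if pred(values[attr])}
    return context

print(dyadic_scale(MANY_VALUED, SCALES))
# patient1 -> {'fever'}, patient2 -> {'elderly'}, patient3 -> {'elderly', 'fever'}
```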

Research Trends in The Journal of Daesoon Academy of Sciences : 『The Journal of Daesoon』 Vol.1-Vol.25 (1996~2015) (『대순사상논총』의 연구 동향에 관한 연구- 『대순사상논총』 1집-25집(1996~2015) -)

  • Chang, In-ho
    • Journal of the Daesoon Academy of Sciences / v.27 / pp.201-243 / 2016
  • This paper analyzes the research trends of the 358 scholarly articles published in the Journal of Daesoon Academy of Sciences, from the first issue in 1996 to the most recent, Vol. 25, in 2015, and proposes ideas for improvement. First of all, the Journal of Daesoon Academy of Sciences does not meet the standards required by the National Research Foundation, falling short of the most important conditions for registration, namely the periodicity and punctuality expected of academic journals. Furthermore, in terms of bibliometric analysis, the number of articles published by the journal is decreasing, and there is no consistency in the rules and principles governing publication details and bibliography formats. Although various authors seem to meet these criteria on the surface, the ratio of co-authored articles is too small. Securing researchers specializing in Daesoon Thought is important for expanding the journal, but it is also important to diversify research topics through exchanges among researchers from various organizations. Some ideas for improvement follow. First, in order to meet the standards for punctuality and periodicity, it would be best to publish the journal twice a year with 12 to 15 articles per issue. Second, the journal must become searchable through the creation of a database. Third, the keywords and abstracts of articles must be written in both Korean and English to facilitate sharing among researchers. Fourth, the journal must have a diverse and outstanding editorial board that takes into account the geographical situations of its members. Fifth, the journal must include articles on relevant topics that reflect the core themes of Daesoon Thought and related studies. Sixth, articles must have a front page containing bibliographical items to convey information to the reader. Seventh, the journal must have a clear publication date detailing the year, month, and day, as well as a standard numbering scheme (i.e., volume and number).

Design of a Bit-Level Super-Systolic Array (비트 수준 슈퍼 시스톨릭 어레이의 설계)

  • Lee Jae-Jin;Song Gi-Yong
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.12 / pp.45-52 / 2005
  • A systolic array, formed by interconnecting a set of identical data-processing cells in a uniform manner, is a combination of an algorithm and a circuit that implements it, and is conceptually close to an arithmetic pipeline. High-performance computation on a large array of cells has been an important feature of systolic arrays. To achieve an even higher degree of concurrency, it is desirable to make the cells of a systolic array systolic arrays themselves. A structure whose cells each consist of another systolic array is called a super-systolic array. This paper proposes a scalable bit-level super-systolic array that can be adopted in VLSI designs requiring the regular interconnection and functional primitives typical of a systolic architecture. The architecture focuses on highly regular computational structures that avoid the large number of global interconnections required in general VLSI implementations. A bit-level super-systolic FIR filter is selected as an example. The derived filter has been modeled and simulated at RT level in VHDL, then synthesized with Synopsys Design Compiler using the Hynix 0.35 μm cell library. Compared with a conventional word-level systolic array, the proposed bit-level super-systolic array is more efficient in terms of area and throughput (a simple word-level simulation of the systolic idea follows this entry).
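A simple word-level simulation of the systolic FIR idea (a sketch only, in Python rather than the paper's VHDL; the coefficient loading and timing conventions are assumptions): each cell holds one coefficient, the current input sample is broadcast, and partial sums advance one cell per clock. In the bit-level super-systolic refinement, each such cell would itself be a small systolic array, e.g. a bit-serial multiplier.

```python
def systolic_fir(x, h):
    """Cycle-by-cycle simulation of a word-level systolic FIR filter."""
    n = len(h)
    cells = [h[n - 1 - k] for k in range(n)]   # coefficients loaded in reverse order
    regs = [0] * n                             # pipeline registers between cells
    out = []
    for sample in list(x) + [0] * (n - 1):     # extra cycles flush the pipeline
        nxt = [0] * n
        for k in range(n):
            left = regs[k - 1] if k > 0 else 0 # partial sum from the left neighbour
            nxt[k] = left + cells[k] * sample  # multiply-accumulate in cell k
        out.append(nxt[-1])                    # rightmost cell emits one output per cycle
        regs = nxt
    return out

print(systolic_fir([1, 2, 3], [1, 1, 1]))      # [1, 3, 6, 5, 3], i.e. the convolution
```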

Edge Enhanced Error Diffusion Halftoning Method Using Local Activity Measure (공간활성도를 이용한 에지 강조 오차확산법)

  • Kwak Nae-Joung;Ahn Jae-Hyeong
    • Journal of Korea Multimedia Society / v.8 no.3 / pp.313-321 / 2005
  • Digital halftoning is a process that produces a binary image such that the original image and its binary counterpart appear similar when observed from a distance. Among digital halftoning methods, error diffusion generates high-quality bilevel images from continuous-tone images, but it blurs edge information in the bilevel images. To solve this problem, we propose an improved error diffusion method that uses local spatial information of the original image. Based on the fact that human vision perceives not a single pixel but the local mean of the input image, we compute edge enhancement information (EEI) by applying the ratio of a pixel and its adjacent pixels to the local mean. The weights applied to the local means are computed from the ratio of the local activity measure (LAM) to the difference between the input pixels of a 3×3 block and their mean. The LAM measures luminance changes in a local region and is obtained by summing the squared differences between the input pixels of the 3×3 block and their mean. The resulting value is added to the input of the quantizer to enhance edges. The performance of the proposed method is compared with conventional methods by measuring edge correlation. Halftone images produced by the proposed method show better quality due to the enhanced edges, detailed edges are preserved, and unpleasant patterns for the human visual system are reduced (a simplified sketch of the idea follows this entry).

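A simplified sketch of edge-enhanced error diffusion (Floyd-Steinberg weights and the particular EEI/LAM weighting below are stand-ins for the paper's exact formulas): a 3×3 local mean and activity measure bias the quantizer input towards edges, while the diffused error excludes that bias so the mean tone is preserved.

```python
import numpy as np

# Floyd-Steinberg error weights as (dy, dx, weight) relative to the current pixel.
FS_WEIGHTS = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def edge_enhanced_error_diffusion(img, strength=0.5):
    """img: 2-D array in [0, 255]; returns a 0/255 halftone (NumPy array)."""
    g = np.asarray(img, dtype=float)        # original image, used for local statistics
    f = g.copy()                            # working copy that accumulates diffused error
    h, w = g.shape
    out = np.zeros_like(g)
    for y in range(h):
        for x in range(w):
            block = g[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]   # 3x3 (clipped at borders)
            local_mean = block.mean()
            lam = ((block - local_mean) ** 2).sum()               # local activity measure
            # edge enhancement term: deviation from the local mean, weighted more
            # strongly where local activity (LAM) is high
            eei = strength * (g[y, x] - local_mean) * lam / (lam + 1.0)
            u = f[y, x]                                           # pixel plus diffused error
            out[y, x] = 255.0 if u + eei >= 128.0 else 0.0        # enhanced quantizer input
            err = u - out[y, x]                                   # EEI excluded from the error,
            for dy, dx, wgt in FS_WEIGHTS:                        # so the mean tone is preserved
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    f[yy, xx] += err * wgt
    return out
```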

Improvements of an English Pronunciation Dictionary Generator Using DP-based Lexicon Pre-processing and Context-dependent Grapheme-to-phoneme MLP (DP 알고리즘에 의한 발음사전 전처리와 문맥종속 자소별 MLP를 이용한 영어 발음사전 생성기의 개선)

  • 김회린;문광식;이영직;정재호
    • The Journal of the Acoustical Society of Korea / v.18 no.5 / pp.21-27 / 1999
  • In this paper, we propose an improved MLP-based English pronunciation dictionary generator for use with a variable-vocabulary word recognizer. The variable-vocabulary word recognizer can process any word specified in a Korean word lexicon determined dynamically according to the current recognition task. To extend the system to English-word tasks, it is necessary to build a pronunciation dictionary generator that can handle words not included in a predefined lexicon, such as proper nouns. To build the English pronunciation dictionary generator, we use a context-dependent grapheme-to-phoneme multi-layer perceptron (MLP) for each grapheme. To train each MLP, grapheme-to-phoneme training data must be obtained from a general pronunciation dictionary. To automate this process, we use a dynamic programming (DP) algorithm with suitable distance metrics. For training and testing the grapheme-to-phoneme MLPs, we use a general English pronunciation dictionary of about 110 thousand words. With 26 MLPs, each having 30 to 50 hidden nodes, together with an exception grapheme lexicon, we obtained a word accuracy of 72.8% on the 110 thousand words, superior to a rule-based method with a word accuracy of 24.0% (a minimal DP-alignment sketch follows this entry).

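A minimal sketch of the DP alignment step that produces per-grapheme phoneme targets for the MLPs (the distance metric and the example word below are illustrative assumptions, not the paper's metrics): a Levenshtein-style alignment pairs each letter with a phoneme or a null symbol.

```python
def dp_align(graphemes, phonemes, sub_cost=None):
    """Align a grapheme sequence with a phoneme sequence by dynamic programming,
    returning (grapheme, phoneme or '-') pairs usable as per-grapheme targets."""
    if sub_cost is None:
        # hypothetical distance metric: free if the phoneme starts with the letter
        sub_cost = lambda g, p: 0.0 if p.lower().startswith(g.lower()) else 1.0
    G, P = len(graphemes), len(phonemes)
    D = [[0.0] * (P + 1) for _ in range(G + 1)]
    for i in range(1, G + 1):
        D[i][0] = float(i)
    for j in range(1, P + 1):
        D[0][j] = float(j)
    for i in range(1, G + 1):
        for j in range(1, P + 1):
            D[i][j] = min(D[i - 1][j - 1] + sub_cost(graphemes[i - 1], phonemes[j - 1]),
                          D[i - 1][j] + 1.0,    # grapheme maps to no phoneme
                          D[i][j - 1] + 1.0)    # extra phoneme with no grapheme
    pairs, i, j = [], G, P                      # backtrack the optimal alignment
    while i > 0 or j > 0:
        if i > 0 and j > 0 and D[i][j] == D[i - 1][j - 1] + sub_cost(graphemes[i - 1], phonemes[j - 1]):
            pairs.append((graphemes[i - 1], phonemes[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and D[i][j] == D[i - 1][j] + 1.0:
            pairs.append((graphemes[i - 1], '-')); i -= 1
        else:
            pairs.append(('-', phonemes[j - 1])); j -= 1
    return list(reversed(pairs))

print(dp_align(list("make"), ["M", "EY", "K"]))
# [('m', 'M'), ('a', 'EY'), ('k', 'K'), ('e', '-')]
```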