• Title/Summary/Keyword: Enumerative Method


Synthesizing Imperative Programs from Examples (예제로부터 명령형 프로그램을 합성하는 방법)

  • So, Sunbeom;Choi, Tae-Hyoung;Jung, Jun;Oh, Hakjoo
    • Journal of KIISE / v.44 no.9 / pp.986-991 / 2017
  • In this paper, we present a method for synthesizing imperative programs from input-output examples. Given (1) a set of input-output examples, (2) an incomplete program, and (3) the variables and integer constants to be used, the synthesizer outputs a complete program that satisfies all of the given examples. The basic synthesis algorithm enumerates all possible candidate programs until the solution program is found (enumerative search). However, it is too slow for practical use due to the huge search space. To accelerate the search, our approach uses code optimization and avoids unnecessarily exploring programs that are syntactically different but semantically equivalent. We have evaluated our synthesis algorithm on 20 introductory programming problems, and the results show that our method improves the speed of the basic algorithm by 10x on average.
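
A rough sketch of the enumerative approach is given below. It is not the authors' synthesizer: it enumerates arithmetic expressions over the given variables and constants in order of size and prunes any candidate whose outputs on the example inputs coincide with those of a previously seen candidate, a simple stand-in for the semantic-equivalence pruning described above.

```python
import itertools

def synthesize(examples, variables, constants, max_size=6):
    """Enumerative synthesis sketch: search expressions of growing size,
    pruning candidates that behave identically on all example inputs."""
    inputs = [inp for inp, _ in examples]
    targets = tuple(out for _, out in examples)

    def evaluate(expr, env):
        kind = expr[0]
        if kind == "var":
            return env[expr[1]]
        if kind == "const":
            return expr[1]
        _, op, left, right = expr
        l, r = evaluate(left, env), evaluate(right, env)
        return {"+": l + r, "-": l - r, "*": l * r}[op]

    # pool[k] holds all expressions of size k with pairwise distinct behaviour
    leaves = [("var", v) for v in variables] + [("const", c) for c in constants]
    pool = {1: leaves}
    seen = set()

    for size in range(1, max_size + 1):
        pool.setdefault(size, [])
        if size > 1:
            for lsize in range(1, size - 1):
                rsize = size - 1 - lsize
                for left, right in itertools.product(pool.get(lsize, []), pool.get(rsize, [])):
                    for op in ("+", "-", "*"):
                        pool[size].append(("bin", op, left, right))
        kept = []
        for expr in pool[size]:
            sig = tuple(evaluate(expr, env) for env in inputs)
            if sig in seen:          # semantically equivalent to an earlier candidate
                continue
            seen.add(sig)
            kept.append(expr)
            if sig == targets:
                return expr
        pool[size] = kept
    return None

# Example: learn f(x) = 2*x + 1 from three input-output pairs.
examples = [({"x": 0}, 1), ({"x": 1}, 3), ({"x": 2}, 5)]
print(synthesize(examples, ["x"], [1, 2]))
```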

Quasiconcave Bilevel Programming Problem

  • Arora S.R.;Gaur Anuradha
    • Management Science and Financial Engineering / v.12 no.1 / pp.113-125 / 2006
  • The bilevel programming problem is a two-stage optimization problem in which the constraint region of the first-level problem is implicitly determined by another optimization problem. In this paper we consider the bilevel quadratic/linear fractional programming problem, in which the objective function of the first level is quasiconcave, the objective function of the second level is linear fractional, and the feasible region is a convex polyhedron. Exploiting the relationship between feasible solutions of the problem and bases of the coefficient submatrix associated with the second-level variables, an enumerative algorithm is proposed that finds a global optimum of the problem.
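
As a reading aid, a bilevel program of the type described (quasiconcave upper-level objective, linear fractional lower-level objective, convex polyhedral feasible region) can be written generically as follows; the symbols are illustrative and not the paper's notation.

```latex
% Generic bilevel program (illustrative notation):
%   F quasiconcave (upper level), lower-level objective linear fractional,
%   S = { (x, y) : Ax + By <= b, x >= 0, y >= 0 } a convex polyhedron.
\begin{align*}
\max_{x}\;\; & F(x, y) \\
\text{s.t.}\;\; & y \in \arg\max_{y'} \left\{ \frac{c^{\top} y' + \alpha}{d^{\top} y' + \beta}
  \;:\; (x, y') \in S \right\}, \qquad (x, y) \in S .
\end{align*}
```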

An Analysis of the Structural Characteristics of the UDC Standard Edition (UDC 표준판의 구조적 특성 분석)

  • Lee, Chang-Soo
    • Journal of Korean Library and Information Science Society / v.39 no.3 / pp.299-320 / 2008
  • This study examined the historical background and structural characteristics of the UDC (Universal Decimal Classification) standard edition, which has been created from the entire content of the Master Reference File database. We made a comparison of the structural characteristics of the UDC standard edition and the Korean abridged edition. UDC is a hybrid of two kinds of documentary classification scheme, namely enumerative and analytico-synthetic, and its structure reflects this feature. It is found that, compared with the Korean abridged edition, the UDC standard edition extends universality and the synthetic method through its auxiliary tables.


The Study of Class Library Design for Reusable Object-Oriented Software (객체지향 소프트웨어 재사용을 위한 클래스 라이브러리 설계에 관한 연구)

  • Lee, Hae-Won;Kim, Jin-Seok;Kim, Hye-Gyu;Ha, Su-Cheol
    • The Transactions of the Korea Information Processing Society / v.6 no.9 / pp.2350-2364 / 1999
  • In this paper, we propose a method of class library repository design that provides reusers with object-oriented C++ class components. To design the class library, we started by studying the characteristics of a reusable component. We formally defined the reusable component model using an entity-relationship model; this formal definition has been used directly as the database schema for storing reusable components in a repository. The reusable class library may be considered a knowledge base for software reuse. Accordingly, we used an enumerative classification as a breakdown of the knowledge base, together with a second classification that clusters classes based on class similarity. The class similarity is composed of member-function similarity and member-data similarity. Finally, we designed the class library around the hierarchical inheritance mechanisms of the object-oriented concepts of Generalization, Specialization, and Aggregation.
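
The clustering criterion mentioned in the abstract, a class similarity composed of member-function similarity and member-data similarity, might be combined roughly as in the sketch below; the Jaccard measure and the equal weights are illustrative assumptions, not the paper's definition.

```python
def jaccard(a, b):
    """Similarity of two name sets: |intersection| / |union| (1.0 if both empty)."""
    a, b = set(a), set(b)
    return 1.0 if not a and not b else len(a & b) / len(a | b)

def class_similarity(cls_a, cls_b, w_func=0.5, w_data=0.5):
    """Combine member-function and member-data similarity with illustrative weights."""
    func_sim = jaccard(cls_a["functions"], cls_b["functions"])
    data_sim = jaccard(cls_a["data"], cls_b["data"])
    return w_func * func_sim + w_data * data_sim

# Toy C++-like class descriptions (hypothetical examples).
stack = {"functions": {"push", "pop", "top", "size"}, "data": {"items"}}
queue = {"functions": {"push", "pop", "front", "size"}, "data": {"items"}}
print(class_similarity(stack, queue))   # 0.5 * (3/5) + 0.5 * 1.0 = 0.8
```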


Derivation of Automatic Classification Principles through a Structural Analysis of CC (CC의 구조적 분석을 통한 분류자동화 원리유도)

  • 이경호
    • Journal of Korean Library and Information Science Society / v.15 / pp.113-151 / 1988
  • Enumerative classification schemes do not represent the tiny mass of knowledge embodied in a journal article or in a chapter or paragraph of a book. Today's information centers, however, regard such a tiny spot of knowledge embodied in an article as a class; we call this micro-thought. Because enumerative classifications are manual systems, they cannot be applied to the automation of classification. The purpose of this study is to build a general principle for automatic book classification that can be put to use in library operation, and to present a methodology of automatic classification for the library. The methodology used is essentially theoretical: published works by and about Ranganathan, especially the 6th edition of the CC, were studied and analyzed. The principle of automatic book classification was derived from the analysis of Colon Classification and facet combinations. The results of this study can be summarized as follows: (1) This study was confined to the fields of library science and agriculture. (2) This study presents general principles for the automatic book classification of library science and agriculture. (3) Program flowcharts are designed as a basis of system analysis and program procedure in library science and agriculture. (4) The principle of automatic classification in library science is different from that of agriculture. (5) Automatic book classification can be performed by the principle of faceted classification, by inputting the title and subject code into a computer. In addition, the expected value from automatic book classification is as follows: (1) Prompt and accurate classification is possible. (2) A book classified in any library can have the same classification number. (3) The user can retrieve the classification code of a book through dialogue with the computer. (4) Since the concept-coordination method is employed, a tiny mass of knowledge embodied in a journal article or in a chapter or paragraph of a book can be represented as a class. (5) By performing automatic book classification, the automation of library operation can be completed.
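
The faceted, concept-coordination principle described above (classifying by combining facet codes derived from the title and subject codes rather than looking up a ready-made class) can be illustrated with the sketch below; the facet tables, keyword matching, and the ':' connector are invented stand-ins in the spirit of Colon Classification, not the author's actual procedure.

```python
# Minimal sketch of facet combination: map keywords found in a title to facet
# codes and join them in a fixed citation order, in the spirit of CC's
# Personality:Matter:Energy:Space:Time formula. The tables below are invented
# for illustration only.
FACETS = {
    "personality": {"library": "2", "agriculture": "J"},
    "energy":      {"classification": "51", "cataloguing": "55"},
    "space":       {"korea": "4119"},
}
CITATION_ORDER = ["personality", "energy", "space"]

def classify(title):
    """Combine the facet codes of recognised keywords into one class number."""
    words = title.lower().split()
    codes = []
    for facet in CITATION_ORDER:
        for keyword, code in FACETS[facet].items():
            if keyword in words:
                codes.append(code)
                break
    return ":".join(codes)

print(classify("Library classification in Korea"))   # -> "2:51:4119"
```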


Transformation of Text Contents of Engineering Documents into an XML Document by using a Technique of Document Structure Extraction (문서구조 추출기법을 이용한 엔지니어링 문서 텍스트 정보의 XML 변환)

  • Lee, Sang-Ho;Park, Junwon;Park, Sang Il;Kim, Bong-Geun
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.6D / pp.849-856 / 2011
  • This paper proposes a method for transforming the unstructured text contents of engineering documents, which have a complex hierarchical structure of subtitles with various heading symbols, into a semi-structured XML document according to the hierarchical subtitle structure. In order to extract the hierarchical structure from plain text information, this study employed document structure extraction, an analysis technique for document structure. In addition, a method for processing enumerative text contents was developed to increase overall accuracy during the extraction of subtitles and the construction of the hierarchical subtitle structure. An application module was developed based on the proposed method, and its performance was evaluated with 40 test documents containing structural calculation records of bridges. The first test group of 20 documents, related to the superstructure of steel girder bridges and used in a previous study, served to verify the enhanced performance of the proposed method. The test results show that the new module guarantees an increase in accuracy and reliability in comparison with the results of the previous study. The remaining 20 test documents were used to evaluate the applicability of the method. The final mean value of accuracy exceeded 99%, and the standard deviation was 1.52. The final results demonstrate that the proposed method can be applied to diverse heading symbols in various types of engineering documents to represent the hierarchical subtitle structure in a semi-structured XML document.
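
The central step of such a transformation (recognising heading symbols, inferring their hierarchical level, and nesting the remaining text under them as XML elements) might look roughly like the following sketch; the heading patterns and tag names are assumptions for illustration, not the module evaluated in the paper.

```python
import re
import xml.etree.ElementTree as ET

# Illustrative heading patterns, from outermost to innermost level.
HEADING_PATTERNS = [
    re.compile(r"^(\d+)\.\s+(.*)"),        # "1. Design conditions"
    re.compile(r"^(\d+\.\d+)\s+(.*)"),     # "1.1 Material properties"
    re.compile(r"^\((\d+)\)\s+(.*)"),      # "(1) Concrete"
]

def to_xml(lines):
    """Nest plain-text lines into XML sections according to heading level."""
    root = ET.Element("document")
    stack = [(0, root)]                        # (level, element) pairs; level 0 = root
    for line in lines:
        line = line.strip()
        if not line:
            continue
        for level, pattern in enumerate(HEADING_PATTERNS, start=1):
            match = pattern.match(line)
            if match:
                while stack[-1][0] >= level:   # close deeper or same-level sections
                    stack.pop()
                section = ET.SubElement(stack[-1][1], "section",
                                        symbol=match.group(1), title=match.group(2))
                stack.append((level, section))
                break
        else:                                  # no heading matched: ordinary text
            ET.SubElement(stack[-1][1], "text").text = line
    return root

lines = ["1. Design conditions", "1.1 Material properties", "(1) Concrete",
         "fck = 27 MPa", "1.2 Load cases", "Dead and live loads are considered."]
print(ET.tostring(to_xml(lines), encoding="unicode"))
```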

A Study on the Revision of UDC Korean Edition (UDC 한국어판의 개정에 관한 연구)

  • Lee, Chang-Soo
    • Journal of Information Management / v.41 no.3 / pp.1-26 / 2010
  • The purpose of this study is to compare and analyze the print form of the UDC (Universal Decimal Classification) standard edition, published by the British Standards Institution in 2005, with the Korean edition, published by the Korea Scientific & Technological Information Center in 1973, and to suggest appropriate directions for the revision of a future Korean edition. This study suggests that the future Korean edition should be revised based on the Master Reference File and should be a print edition composed of the systematic tables and the alphabetical index from the standard edition. In addition, the future Korean edition needs to strengthen international universality and to extend the synthetic method using its auxiliary tables.

Principles of the Automatic Book-Classification (도서분류자동화 원리유도에 관한 연구)

  • 심의순;이경호
    • Journal of Korean Library and Information Science Society / v.11 / pp.175-209 / 1984
  • The purpose of this study is to build a general principle for automatic book classification that can be put to use in library operation, and to present a methodology of automatic classification for the library. Since enumerative classification schemes, which exist as manual systems, cannot be applied to the automation of classification, the principles of Colon Classification by S.R. Ranganathan are brought in and studied. The results of the study can be summarized as follows: (1) Automatic book classification can be performed by the principles of faceted classification. (2) This study presents general and application principles for automatic book classification. (3) File design for the automatic book classification of a general classification is different from that of a special classification. (4) The methodology is to classify the literature by inputting the title into a terminal. In addition, the expected value from automatic book classification is as follows: (1) Prompt and accurate classification is possible. (2) A book classified in any library can have the same classification number. (3) The user can retrieve the classification code of a book through dialogue with the computer. (4) Since the concept-coordination method is employed, even representing a multi-subject concept is simple. (5) By performing automatic book classification, the automation of library operation can be completed.


A Cloud-Edge Collaborative Computing Task Scheduling and Resource Allocation Algorithm for Energy Internet Environment

  • Song, Xin;Wang, Yue;Xie, Zhigang;Xia, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2282-2303 / 2021
  • To solve the problems of heavy computing load and system transmission pressure in the energy internet (EI), we establish a three-tier cloud-edge integrated EI network based on cloud-edge collaborative computing to achieve a tradeoff between energy consumption and system delay. A joint optimization problem for resource allocation and task offloading in the three-tier cloud-edge integrated EI network is formulated to minimize the total system cost under the constraints of the task scheduling binary variables of each sensor node, the maximum uplink transmit power of each sensor node, the limited computation capability of the sensor node, and the maximum computation resource of each edge server; this is a Mixed Integer Non-linear Programming (MINLP) problem. To solve it, we propose a joint task offloading and resource allocation algorithm (JTOARA), which is decomposed into three sub-problems: uplink transmission power allocation, computation resource allocation, and offloading scheme selection. The power allocation of each sensor node is obtained by a bisection search algorithm, which converges quickly. The computation resource allocation is derived by a line optimization method and convex optimization theory. Finally, to achieve the optimal task offloading, we propose a cloud-edge collaborative computation offloading scheme based on game theory and prove the existence of a Nash Equilibrium. The simulation results demonstrate that our proposed algorithm improves output performance compared with the conventional algorithms, and its performance is close to that of the enumerative algorithm.
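
The bisection step used for the transmit-power sub-problem can be illustrated generically as below; the delay-plus-energy cost function and the link parameters are made-up assumptions, not the paper's objective. The sketch only shows how bisection on the cost's slope locates the minimiser of a decreasing-then-increasing cost.

```python
import math

# Illustrative link parameters (assumptions, not the paper's settings).
DATA_BITS = 1e6          # task size to upload [bits]
BANDWIDTH = 1e6          # channel bandwidth [Hz]
GAIN_OVER_NOISE = 1e3    # channel gain / noise power
W_DELAY, W_ENERGY = 0.5, 0.5

def cost(p):
    """Weighted sum of upload delay and upload energy for transmit power p > 0."""
    rate = BANDWIDTH * math.log2(1.0 + p * GAIN_OVER_NOISE)
    delay = DATA_BITS / rate
    return W_DELAY * delay + W_ENERGY * p * delay

def bisection_power(p_min=1e-6, p_max=1.0, tol=1e-6):
    """Bisection on the cost slope: the cost decreases then increases in p,
    so the sign of cost'(p) brackets the minimiser."""
    def slope(p, h=1e-7):
        return (cost(p + h) - cost(p - h)) / (2 * h)
    lo, hi = p_min, p_max
    if slope(hi) < 0:            # still decreasing at p_max: use the full power budget
        return hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if slope(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_star = bisection_power()
print(f"optimal transmit power ~ {p_star:.4f} W, cost = {cost(p_star):.4f}")
```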

THE PROBABILISTIC METHOD MEETS GO

  • Farr, Graham
    • Journal of the Korean Mathematical Society / v.54 no.4 / pp.1121-1148 / 2017
  • Go is an ancient game of great complexity and has a huge following in East Asia. It is also very rich mathematically, and can be played on any graph, although it is usually played on a square lattice. As with any game, one of the most fundamental problems is to determine the number of legal positions, or the probability that a random position is legal. A random Go position is generated using a model previously studied by the author, with each vertex being independently Black, White or Uncoloured with probabilities $q$, $q$, $1-2q$ respectively. In this paper we consider the probability of legality for two scenarios. Firstly, for an $N \times N$ square lattice graph, we show that, with $q = cN^{-\alpha}$ and $c$ and $\alpha$ constant, as $N \rightarrow \infty$ the limiting probability of legality is 0, $\exp(-2c^5)$, and 1 according as $\alpha < 2/5$, $\alpha = 2/5$ and $\alpha > 2/5$ respectively. On the way, we investigate the behaviour of the number of captured chains (or chromons). Secondly, for a random graph on $n$ vertices with edge probability $p$ generated according to the classical Gilbert-Erdős-Rényi model $\mathcal{G}(n, p)$, we classify the main situations according to their asymptotic almost sure legality or illegality. Our results draw on a variety of probabilistic and enumerative methods including linearity of expectation, the second moment method, factorial moments, polyomino enumeration, giant components in random graphs, and typicality of random structures. We conclude with suggestions for further work.
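
The random-position model in the abstract (each vertex independently Black, White, or Uncoloured with probabilities $q$, $q$, $1-2q$; a position is legal when every monochromatic chain has at least one liberty) is straightforward to simulate. The sketch below estimates the probability of legality on an $N \times N$ lattice by Monte Carlo; it illustrates the model only, and at finite $N$ the estimate need not be close to the asymptotic limit quoted above.

```python
import random

def random_position(n, q, rng):
    """Each vertex independently Black ('B'), White ('W'), or empty ('.')
    with probabilities q, q, 1 - 2q."""
    def colour():
        u = rng.random()
        return "B" if u < q else "W" if u < 2 * q else "."
    return [[colour() for _ in range(n)] for _ in range(n)]

def is_legal(board):
    """Legal iff every maximal monochromatic chain has at least one liberty
    (an adjacent empty vertex)."""
    n = len(board)
    seen = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if board[i][j] == "." or seen[i][j]:
                continue
            colour, stack, liberty = board[i][j], [(i, j)], False
            seen[i][j] = True
            while stack:                      # flood-fill one chain
                x, y = stack.pop()
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if not (0 <= nx < n and 0 <= ny < n):
                        continue
                    if board[nx][ny] == ".":
                        liberty = True
                    elif board[nx][ny] == colour and not seen[nx][ny]:
                        seen[nx][ny] = True
                        stack.append((nx, ny))
            if not liberty:
                return False
    return True

def estimate_legality(n, q, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(is_legal(random_position(n, q, rng)) for _ in range(trials)) / trials

# With q = c * N**(-2/5), the limiting legality probability is exp(-2 c**5).
print(estimate_legality(n=19, q=1.0 * 19 ** -0.4))
```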