• Title/Summary/Keyword: Constraint database


Optimum seismic design of reinforced concrete frame structures

  • Gharehbaghi, Sadjad;Moustafa, Abbas;Salajegheh, Eysa
    • Computers and Concrete
    • /
    • v.17 no.6
    • /
    • pp.761-786
    • /
    • 2016
  • This paper proposes an automated procedure for the optimum seismic design of reinforced concrete (RC) frame structures. The procedure combines smart pre-processing using a Tree Classification Method (TCM) with a nonlinear optimization technique. First, the TCM automatically creates a database of sections and assigns sections to structural members. Subsequently, a real-valued Particle Swarm Optimization (PSO) algorithm is employed to solve the optimization problem. Numerical examples on the design optimization of three low- to high-rise RC frame structures under earthquake loads are presented with and without the strong column-weak beam (SCWB) constraint. The results demonstrate the effectiveness of the TCM in seismic design optimization of the structures.
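The real-valued PSO loop the abstract refers to can be sketched as follows. This is a generic, minimal PSO minimizing a toy sphere function that stands in for structural cost; it is not the authors' TCM-coupled implementation, and all parameter values are illustrative.

```python
import random

random.seed(0)  # deterministic run for the sketch

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=0.0, hi=10.0):
    """Minimize `objective` over [lo, hi]^dim with a basic real-valued PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp positions to the feasible box
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function standing in for structural cost.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the paper's setting, the objective would evaluate the structure assembled from the TCM-assigned sections, with penalty terms for code and SCWB constraint violations.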

Nonlinear Elastic Optimal Design Using Genetic Algorithm (유전자 알고리즘을 이용한 비선형 탄성 최적설계)

  • Kim, Seung Eock;Ma, Sang Soo
    • Journal of Korean Society of Steel Construction
    • /
    • v.15 no.2
    • /
    • pp.197-206
    • /
    • 2003
  • An optimal design method working in cooperation with a nonlinear elastic analysis method is presented. The proposed nonlinear elastic method overcomes the drawback of the conventional LRFD method, which accounts for nonlinear effects only approximately through moment amplification factors. The genetic algorithm uses a procedure based on the Darwinian notion of survival of the fittest, where selection, crossover, and mutation operators search for high-performing sections in the database; the selected sections satisfy the constraint functions and give the structure the lightest weight. The objective function is the total weight of the steel structure, and the constraint functions cover load-carrying capacity, serviceability, and ductility requirements. Case studies of a two-dimensional frame, a three-dimensional frame, and a three-dimensional steel arch bridge are presented.
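The selection/crossover/mutation loop described above can be sketched as a small genetic algorithm that picks a section from a database for each member, minimizing total weight under capacity constraints. The section database, demands, and penalty weight here are invented for illustration; the paper's actual objective and constraints come from nonlinear elastic analysis.

```python
import random

random.seed(1)

# Hypothetical section database: (name, weight per member, load capacity).
SECTIONS = [("S1", 10, 40), ("S2", 14, 60), ("S3", 19, 85), ("S4", 25, 110)]
DEMANDS = [50, 70, 30, 100, 60]   # required capacity of each member

def fitness(chrom):
    """Total weight plus a large penalty for each capacity violation."""
    weight = sum(SECTIONS[g][1] for g in chrom)
    penalty = sum(1000 for g, d in zip(chrom, DEMANDS) if SECTIONS[g][2] < d)
    return weight + penalty

def ga(pop_size=30, gens=100, pm=0.1):
    pop = [[random.randrange(len(SECTIONS)) for _ in DEMANDS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                 # survival of the fittest
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)           # selection
            cut = random.randrange(1, len(DEMANDS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:                 # mutation
                child[random.randrange(len(DEMANDS))] = random.randrange(len(SECTIONS))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

With this data the lightest feasible assignment picks, for each member, the lightest section whose capacity covers its demand; the penalty term steers the population toward such feasible designs.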

A Study on the Development of Tourism Itinerary Using Historical and Cultural Resources Information System (역사.문화자원 정보시스템을 활용한 관광코스 개발 방안에 관한 연구)

  • 이우종
    • Spatial Information Research
    • /
    • v.8 no.2
    • /
    • pp.191-201
    • /
    • 2000
  • Recently, the tourism industry has been expanding its arena of activity, boosted by the revitalization of regional economies and the reorganization of urban environments undertaken for place marketing. However, Korea has not been able to develop a variety of tourism itineraries that can attract the interest of tourists, because most tourism resources are isolated and serve a single function. This constraint has prevented the Korean tourism industry from attaining competitiveness. In an attempt to overcome it, this study seeks to develop tourism itineraries by constructing and utilizing a database of historical and cultural tourism resources, based on the case of Kangneung City. Through the construction of an information system on historical and cultural resources and tourism support facilities, optimal tourism itineraries can be developed that take into account the maintenance conditions, types, values, lodging availability, and accessibility of tourism resource areas. Moreover, this information system is expected to see a high level of future use, as it can provide a wide range of tourism services over the internet, supply educational materials, and facilitate resource management.


Object Modeling for Mapping from XML Document and Query to UML Class Diagram based on XML-GDM (XML-GDM을 기반으로 한 UML 클래스 다이어그램으로 사상을 위한 XML문서와 질의의 객체 모델링)

  • Park, Dae-Hyun;Kim, Yong-Sung
    • The KIPS Transactions:PartD
    • /
    • v.17D no.2
    • /
    • pp.129-146
    • /
    • 2010
  • Nowadays, XML is favored by many companies, internally and externally, as a means of sharing and distributing data, and many studies and systems model and store XML documents with object-oriented methods as a way of saving and managing web-based multimedia documents more easily. The representative tool for object-oriented modeling of XML documents is UML (Unified Modeling Language). UML began as an integrated methodology for software development, but it is now used more frequently as a modeling language for a wide range of objects. UML supports various diagrams for object-oriented analysis and design, such as the class diagram, and is widely used to derive database schemas and object-oriented code from them. This paper proposes efficient query modeling of XML-GL using the UML class diagram and OCL for searching XML documents, whose application scope has widened with the increased use of the WWW and XML's flexible and open nature. To accomplish this, we propose modeling rules and an algorithm that map XML-GL, which provides modeling for XML documents and DTDs as well as a graphical query facility over them. To describe the constraints on model components precisely, they are defined in OCL (Object Constraint Language). The proposed technique creates queries over XML documents that hold the various properties of an object-oriented model, by modeling the XML-GL query from the XML document, the XML DTD, and the XML query using the UML class diagram. By visually converting, saving, and managing XML documents in an object-oriented graphic data model, users gain a basis for expressing searches and queries over XML documents intuitively and visually. Compared to existing XML-based query languages, the approach has various object-oriented characteristics and uses the UML notation widely adopted for object modeling. Hence, users can construct graphical, intuitive queries over XML-based web documents without learning a new query language. By using the same modeling tool, the UML class diagram, for XML document content and for query syntax and semantics, all processes such as searching and saving XML documents from/to an object-oriented database can be performed consistently.

Efficient Mining of Frequent Subgraph with Connectivity Constraint

  • Moon, Hyun-S.;Lee, Kwang-H.;Lee, Do-Heon
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.267-271
    • /
    • 2005
  • The goal of data mining is to extract new and useful knowledge from large-scale datasets. As the amount of available data grows explosively, it has become vitally important to develop faster data mining algorithms for various types of data. Recently, interest in data mining algorithms that operate on graphs has increased; in particular, mining frequent patterns from structured data such as graphs has drawn the attention of many research groups. A graph is a highly adaptable representation scheme used in many domains, including chemistry, bioinformatics, and physics. For example, the chemical structure of a given substance can be modelled by an undirected labelled graph in which each node corresponds to an atom and each edge to a chemical bond between atoms. The Internet can also be modelled as a directed graph in which each node corresponds to a web site and each edge to a hypertext link between web sites. Notably, in bioinformatics, various kinds of newly discovered data such as gene regulation networks or protein interaction networks can be modelled as graphs. There have been a number of attempts to find useful knowledge in such graph-structured data, and one of the most powerful analysis tools is frequent subgraph analysis: recurring patterns in graph data can provide incomparable insight into that data. However, finding recurring subgraphs is computationally extremely expensive. At the core of the problem lie two challenging subproblems: 1) subgraph isomorphism and 2) enumeration of subgraphs. The former involves the subgraph isomorphism problem (does graph A contain graph B?) and the graph isomorphism problem (are two graphs A and B the same or not?); even these simplified versions of the subgraph mining problem are known to be NP-complete or isomorphism-complete, and no polynomial-time algorithm is known. The latter is also difficult: with no constraint, we would have to generate all 2^n subgraphs, where n is the number of vertices of the input graph. To find frequent subgraphs in a larger graph database, it is therefore essential to place appropriate constraints on the subgraphs to be found. Most current approaches focus on the frequency of a subgraph: the higher the frequency of a graph, the more attention it should receive. Recently, several algorithms that use level-by-level approaches to find frequent subgraphs have been developed. Some newly emerging applications suggest that other constraints, such as connectivity, can also be useful in mining subgraphs: more strongly connected parts of a graph are more informative. If we restrict the set of subgraphs to mine to more strongly connected parts, the computational complexity can be decreased significantly. In this paper, we present an efficient algorithm to mine frequent subgraphs that are more strongly connected. An experimental study shows that the algorithm scales to graphs with more than ten thousand vertices.
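The pruning effect of a connectivity constraint can be illustrated with a minimal enumerator: instead of scanning all 2^n vertex subsets, grow each candidate set only through neighboring vertices, so only connected subgraphs are ever generated. This is a toy sketch of the enumeration side only, not the paper's mining algorithm (which also handles frequency counting and stronger connectivity measures).

```python
def connected_subgraph_vertex_sets(adj):
    """Enumerate vertex sets that induce connected subgraphs, by growing
    each set only through neighboring vertices (never scanning all 2^n sets)."""
    found = set()
    frontier = [frozenset([v]) for v in adj]   # every single vertex is connected
    while frontier:
        s = frontier.pop()
        if s in found:
            continue
        found.add(s)
        # candidate extensions: vertices adjacent to the current set
        neighbors = set().union(*(adj[v] for v in s)) - s
        for n in neighbors:
            frontier.append(s | {n})
    return found

# Toy graph: a path 0-1-2-3 plus an isolated vertex 4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}, 4: set()}
conn = connected_subgraph_vertex_sets(adj)
# Only 11 of the 2^5 - 1 = 31 non-empty vertex sets induce connected subgraphs.
```

Disconnected sets such as {0, 2} are never even generated, which is exactly the search-space reduction the abstract argues for.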


Customized Configuration with Template and Options (맞춤구성을 위한 템플릿과 Option 기반의 추론)

  • 이현정;이재규
    • Journal of Intelligence and Information Systems
    • /
    • v.8 no.1
    • /
    • pp.119-139
    • /
    • 2002
  • In electronic catalogs, each item is represented as an independent unit, while the parts of an item can be composed into a higher level of functionality. The search over this kind of product database is therefore limited to retrieving the most similar standard commodities. However, many industrial products need optional parts configured to fulfill the required specifications, and since there are many paths to the required specifications, we need to develop a search system that works via a configuration process. This system adopts a two-phase approach: the first phase finds the most similar template, and the second phase adjusts the template's specifications toward the required set of specifications using the Constraint and Rule Satisfaction Problem approach. Since there is no guarantee that the most similar template leads to the most desirable configuration, the search system needs backtracking capability, so that the search can stop at a satisfactory local optimum. The framework is applied to the configuration of computers and peripherals. Template-based reasoning is essentially case-based reasoning: the required set of specifications is represented as a list of criteria and matched against product specifications to find the closest ones. To measure the distance, we develop a thesaurus of values that can identify the meaning of numbers, symbols, and words. With this configuration, the performance of the search-by-configuration algorithm is evaluated in terms of feasibility and admissibility.
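The two-phase approach can be sketched as: retrieve the most similar template (case-based retrieval), then adjust its optional parts to satisfy the required specification. The template catalog, option lists, and distance measure below are invented for illustration; the paper uses a richer thesaurus-based distance and rule-driven adjustment with backtracking.

```python
# Hypothetical catalog of computer templates; names and fields are illustrative.
TEMPLATES = {
    "office": {"cpu_ghz": 2.4, "ram_gb": 8,  "disk_gb": 256},
    "gaming": {"cpu_ghz": 3.8, "ram_gb": 32, "disk_gb": 1024},
    "server": {"cpu_ghz": 3.0, "ram_gb": 64, "disk_gb": 2048},
}
OPTIONS = {"ram_gb": [8, 16, 32, 64], "disk_gb": [256, 512, 1024, 2048]}

def distance(spec, template):
    """Normalized absolute difference summed over the requested criteria."""
    return sum(abs(spec[k] - template[k]) / max(spec[k], template[k])
               for k in spec)

def configure(spec):
    # Phase 1: retrieve the most similar template.
    name = min(TEMPLATES, key=lambda t: distance(spec, TEMPLATES[t]))
    config = dict(TEMPLATES[name])
    # Phase 2: adjust optional parts toward the required specification,
    # choosing the smallest catalog option that satisfies the constraint.
    for k, choices in OPTIONS.items():
        if k in spec:
            feasible = [c for c in choices if c >= spec[k]]
            config[k] = min(feasible) if feasible else max(choices)
    return name, config

name, config = configure({"cpu_ghz": 3.5, "ram_gb": 24, "disk_gb": 700})
```

Here the "gaming" template is closest to the request, and its RAM and disk options are then adjusted upward to the smallest catalog values meeting the requirement.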


A Systematic Review of Constraint Induced Movement Therapy about Upper Extremity in Stroke (뇌졸중 환자의 상지 강제유도운동치료에 관한 체계적 고찰)

  • Park, Su-Hyang;Baek, Soon-Hyung;Shin, Joong-il
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.3
    • /
    • pp.149-161
    • /
    • 2016
  • The purpose of this study is to provide useful data for establishing more specific clinical plans for Constraint Induced Movement Therapy (CIMT) for stroke patients, and to suggest directions for further CIMT research. The method used was a systematic review, a research method that can present scientific evidence. Data were organized by PICO (Patient, Intervention, Comparison, Outcome). The Embase and Medline databases were searched for CIMT and stroke, and a total of 42 studies meeting the purpose of the present study were selected. In terms of quality, the included studies were systematic reviews, meta-analyses, and randomized controlled trials, and high-quality studies accounted for 50% of the total. By study period, more research (a difference of 42.8%) was conducted before 2010, and CIMT was used more than mCIMT, with a difference of 40.5%. Over 75% of the studies reported effectiveness, regardless of the CIMT intervention. In conclusion, CIMT has an effect on the impaired upper limbs of stroke patients, and these results can serve as useful material for developing CIMT in clinical treatment plans. Future studies will need to validate the effectiveness of mCIMT.

Enhancing the performance of taxi application based on in-memory data grid technology (In-memory data grid 기술을 활용한 택시 애플리케이션 성능 향상 기법 연구)

  • Choi, Chi-Hwan;Kim, Jin-Hyuk;Park, Min-Kyu;Kwon, Kaaen;Jung, Seung-Hyun;Nazareno, Franco;Cho, Wan-Sup
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.5
    • /
    • pp.1035-1045
    • /
    • 2015
  • Recent studies in big data analysis have shown promising results by utilizing main memory for rapid data processing. In-memory computing technology can be highly advantageous when used with high-performing servers that have tens of gigabytes of RAM and multi-core processors, and the network constraints of such infrastructures can be lessened by combining in-memory technology with distributed parallel processing. This paper discusses applying this concept to a test taxi-hailing application without discarding its underlying RDBMS structure. Applying IMDG technology in the application's backend API, without restructuring the database schema, yields a 6- to 9-fold increase in data processing performance and throughput. In particular, the change in throughput is very small even as the data processing load increases.
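The pattern the abstract describes, placing an in-memory grid in front of the RDBMS without touching the schema, is essentially a cache-aside read path. The sketch below is a toy illustration with a simulated database lookup; the class and function names are invented, and a real deployment would use an actual IMDG product over the network.

```python
import time

class InMemoryGrid:
    """Toy stand-in for an in-memory data grid node (a key-value cache)."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def put(self, key, value):
        self.store[key] = value

def slow_rdbms_query(taxi_id):
    """Simulated RDBMS lookup; the schema itself is untouched by caching."""
    time.sleep(0.01)           # stands in for disk and network latency
    return {"taxi_id": taxi_id, "status": "available"}

grid = InMemoryGrid()

def get_taxi(taxi_id):
    """Cache-aside read: serve from the grid, fall back to the database."""
    hit = grid.get(taxi_id)
    if hit is not None:
        return hit
    row = slow_rdbms_query(taxi_id)
    grid.put(taxi_id, row)     # populate the grid on a miss
    return row

t0 = time.perf_counter(); get_taxi(7); miss = time.perf_counter() - t0
t0 = time.perf_counter(); get_taxi(7); hit = time.perf_counter() - t0
```

The second lookup skips the simulated database entirely, which is the source of the throughput gain the paper measures at the API layer.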

User Interaction-based Graph Query Formulation and Processing (사용자 상호작용에 기반한 그래프질의 생성 및 처리)

  • Jung, Sung-Jae;Kim, Taehong;Lee, Seungwoo;Lee, Hwasik;Jung, Hanmin
    • Journal of KIISE:Databases
    • /
    • v.41 no.4
    • /
    • pp.242-248
    • /
    • 2014
  • With the rapidly growing amount of information represented in RDF format, efficient querying of RDF graphs has become a fundamental challenge. SPARQL is one of the most widely used query languages for retrieving information from RDF datasets; it is simple in syntax yet powerful in representing graph pattern queries. However, users must spend considerable effort understanding the ontology schema of a dataset in order to compose a relevant SPARQL query. In this paper, we propose a graph query formulation and processing scheme based on ontology schema information obtained by summarizing the RDF graph. Under the proposed scheme, a user can interactively formulate graph queries in a graphical user interface without having to understand the ontology schema or even learn SPARQL syntax. The graph query formulated by the user is transformed into a set of class paths, which are stored in a relational database and used as constraints for search-space reduction when the relational database executes the graph search operation. Executing LUBM queries 2, 8, and 9 over LUBM(10,0) shows that the proposed scheme returns the complete result set.
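Evaluating a graph pattern over triples stored relationally amounts to self-joins over an edge table, with class-path information constraining which joins are attempted. The sketch below is a toy illustration in SQLite with invented data, not the paper's system: it evaluates the pattern "?s advisor ?p . ?p worksFor ?d", which would correspond to a single class path like Student → Professor → Department.

```python
import sqlite3

# Toy RDF-like edge table; in the paper, class paths derived from the
# ontology schema summary are stored relationally and prune the search.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edges (src TEXT, pred TEXT, dst TEXT)")
con.executemany("INSERT INTO edges VALUES (?, ?, ?)", [
    ("Student1", "advisor", "Prof1"),
    ("Student2", "advisor", "Prof2"),
    ("Prof1", "worksFor", "Dept1"),
    ("Prof2", "worksFor", "Dept2"),
])

# Graph pattern "?s advisor ?p . ?p worksFor ?d" as a self-join; the
# predicate filters play the role of the class-path constraint.
rows = con.execute("""
    SELECT a.src, a.dst, b.dst
    FROM edges a JOIN edges b ON a.dst = b.src
    WHERE a.pred = 'advisor' AND b.pred = 'worksFor'
""").fetchall()
```

Restricting the join to edges matching the class path keeps the relational engine from exploring joins that cannot contribute to the result.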

Application of a Computerized Least-Cost Formulation in Processing an Emulsion-Type Sausage (유화형 소시지 제조시 컴퓨터를 이용한 최소가격배합프로그램의 적용)

  • Nam, Ki-Chang;Lee, Moo-Ha
    • Korean Journal of Food Science and Technology
    • /
    • v.25 no.5
    • /
    • pp.481-486
    • /
    • 1993
  • A computerized least-cost formulation program was applied to the processing of emulsion-type sausages. The input data for the formulation came from the database established in a previous study. The formulation results may provide Korean meat processors with practical examples, and the meat-grade system made these examples more useful. The results of the manufacturing tests were as follows: the actual cohesiveness of the manufactured sausages did not match the predicted values, but it increased as the predicted values increased. These gaps were caused by the different processing conditions of the model system and the actual process. Hardness, as well as cohesiveness, can be used as a desirable index of sausage texture. Comparing the cohesiveness and hardness of commercial frankfurters with those of the test sausages, a bind-value constraint of 0.16~0.17 in this test formula can be used for actual formulation.
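Least-cost formulation of this kind is a small linear program: minimize ingredient cost subject to the mix fractions summing to one and a minimum bind value. The sketch below brute-forces the grid of fractions instead of calling an LP solver, and the ingredient names, costs, and bind values are invented for illustration only.

```python
# Hypothetical ingredient data: (cost per kg, bind value contribution per kg).
INGREDIENTS = {"pork": (3.0, 0.20), "beef": (5.0, 0.24), "filler": (1.0, 0.05)}

def least_cost(target_bind=0.16, step=0.01):
    """Brute-force search over ingredient fractions summing to 1.0 that meet
    the bind-value constraint at minimum cost (a stand-in for an LP solver)."""
    best = None
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j  # remaining fraction goes to filler
            mix = {"pork": i * step, "beef": j * step, "filler": k * step}
            bind = sum(frac * INGREDIENTS[m][1] for m, frac in mix.items())
            cost = sum(frac * INGREDIENTS[m][0] for m, frac in mix.items())
            if bind >= target_bind and (best is None or cost < best[0]):
                best = (cost, mix)
    return best

cost, mix = least_cost()
```

With these numbers the cheapest feasible mix blends pork and filler, hitting the bind constraint at minimum cost; a production system would use a proper LP solver and the measured bind values from the database.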
