• Title/Summary/Keyword: Set Cover Optimization

Genetic algorithm based optimum design of non-linear steel frames with semi-rigid connections

  • Hayalioglu, M.S.;Degertekin, S.O.
    • Steel and Composite Structures
    • /
    • v.4 no.6
    • /
    • pp.453-469
    • /
    • 2004
  • In this article, a genetic algorithm based optimum design method is presented for non-linear steel frames with semi-rigid connections. The design algorithm obtains the minimum-weight frame by selecting suitable sections from a standard set of steel sections such as European wide-flange beams (i.e., HE sections). A genetic algorithm is employed as the optimization method, utilizing reproduction, crossover and mutation operators. Displacement and stress constraints of the Turkish Building Code for Steel Structures (TS 648, 1980) are imposed on the frame. The algorithm requires a large number of non-linear analyses of frames. The analyses cover both the non-linear behaviour of the beam-to-column connections and $P-\Delta$ effects of beam-column members. The Frye and Morris polynomial model is used for modelling the semi-rigid connections. Two design examples with various types of connections are presented to demonstrate the application of the algorithm. Semi-rigid connection modelling results in more economical solutions than rigid connection modelling, but it increases frame drift.
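The abstract above describes a standard genetic-algorithm loop (reproduction, crossover, mutation) over discrete section choices. A minimal sketch of that loop follows; the section areas, the single penalized capacity constraint standing in for the paper's stress and displacement checks, and all parameter values are illustrative assumptions, not the paper's data:

```python
import random

# Illustrative only: candidate section areas standing in for a standard
# steel section table; frame weight is taken proportional to total area.
SECTIONS = [10.0, 13.0, 17.0, 22.0, 28.0, 36.0, 46.0, 60.0]
N_MEMBERS = 6          # members in a hypothetical frame
CAPACITY = 150.0       # hypothetical demand the chosen sections must meet

def fitness(design):
    weight = sum(SECTIONS[i] for i in design)
    # Penalize constraint violation, standing in for the TS 648 checks.
    violation = max(0.0, CAPACITY - sum(SECTIONS[i] for i in design))
    return weight + 100.0 * violation

def evolve(pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(SECTIONS)) for _ in range(N_MEMBERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # reproduction: keep the fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_MEMBERS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # mutation
                child[rng.randrange(N_MEMBERS)] = rng.randrange(len(SECTIONS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

In the paper each fitness evaluation is a full non-linear frame analysis, which is why the method's cost is dominated by the number of analyses.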

A Study for Optimal Dose Planning in Stereotactic Radiosurgery

  • Suh, Tae-suk
    • Progress in Medical Physics
    • /
    • v.1 no.1
    • /
    • pp.23-29
    • /
    • 1990
  • To explain the stereotactic procedure, its three steps (target localization, dose planning, and radiation treatment) must be examined separately. The ultimate accuracy of the full procedure depends on each of these steps and on the consistency of the approach. The concern in this article is dose planning, which is an important factor in the success of radiation treatment. The major element in dose planning is a dosimetry system that evaluates the dose delivered to the target and to normal tissues in the patient while generating an optimal dose distribution satisfying a set of clinical criteria for the patient. A three-dimensional treatment planning program is a prerequisite for treatment plan optimization; it must cover 3-D methods for representing the patient, the dose distributions, and the beam settings. The major problems and possible models of the 3-D factors and optimization technique are discussed so as to simplify and solve the problems associated with 3-D optimization with relative ease and efficiency. These modifications simplify the optimization problem while saving time, and can be used to develop a reference dose planning system that provides standard guidelines for selecting optimum beam parameters, such as the target position, collimator size, arc spacing, and the variation in arc length and weight. The method yields good results, which can then be simulated and tailored to the individual case. The procedure needed for dose planning in stereotactic radiosurgery is shown in figure 1.

Quasi-Static Structural Optimization Technique Using Equivalent Static Loads Calculated at Every Time Step as a Multiple Loading Condition (매 시간단계의 등가정하중을 다중하중조건으로 이용한 준정적 구조최적화 방법)

  • Choe, U-Seok;Park, Gyeong-Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.24 no.10 s.181
    • /
    • pp.2568-2580
    • /
    • 2000
  • This paper presents a quasi-static optimization technique for elastic structures under dynamic loads. An equivalent static load (ESL) set is defined as a static load set that generates the same displacement field as the dynamic load at a certain time. Multiple ESL sets calculated at every time step are employed to represent the various states of the structure under the dynamic load; they can cover every critical state that might occur at an arbitrary time. The continuous character of the dynamic load is thus represented by multiple discrete static load sets. The calculated ESL sets are applied as a multiple loading condition in the optimization process. A design cycle is defined as a circulated process between an analysis domain and a design domain, and design cycles are repeated until the design converges. The analysis domain provides the loading condition needed by the design domain; the design domain provides a new, updated design to be verified by the analysis domain in the next design cycle. This iterative process is quite similar to that of multidisciplinary optimization techniques. Even though global convergence cannot be guaranteed, the proposed technique makes it possible to optimize structures under dynamic loads, and it offers applicability, flexibility, and reliability.
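For a linear system the ESL definition above reduces to a matrix product: given the stiffness matrix K and the displacement field d(t_k) from the dynamic analysis, the static load f = K d(t_k) reproduces that displacement field exactly. A tiny sketch under that assumption, with a hypothetical 2-DOF stiffness matrix and made-up displacement snapshots:

```python
# Hypothetical 2-DOF stiffness matrix (illustrative numbers only).
K = [[2.0, -1.0],
     [-1.0, 1.0]]

def esl(K, d):
    """Equivalent static load set for one time step: f = K d."""
    return [sum(K[r][c] * d[c] for c in range(len(d))) for r in range(len(K))]

# Displacement snapshots from a (hypothetical) dynamic analysis,
# one per time step; each yields one static load case.
history = [[0.0, 0.0], [0.1, 0.3], [0.2, 0.5]]
load_sets = [esl(K, d) for d in history]
```

The resulting `load_sets` would then be applied together as a multiple loading condition in the static optimization, as the abstract describes.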

A New Approach to the Minimization of Two-level Reed-Muller Circuits (이단계 Reed-Muller 회로의 최소화에 관한 새로운 접근)

  • 장준영;김귀상
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.9
    • /
    • pp.1-8
    • /
    • 1993
  • In this paper, a new approach to the minimization of two-level Reed-Muller circuits is presented. In contrast to the previous method of using Xlinking operations to join two cubes for minimization, the cube selection method tries to select cubes one at a time until they cover the ON-set of the given function. A simple heuristic for selecting appropriate cubes is presented: all cubes, from the largest to the smallest, are tried, and a cube is accepted whenever it decreases the number of remaining terms. Since cubes once selected are not reconsidered, our method takes less time than other methods that need a repetitive optimization process. The experimental results are improved in many cases compared to the best results in the literature.
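The selection heuristic (try cubes from largest to smallest, accept any that reduces the remaining terms) can be sketched in plain set-cover terms. Note this is a simplification: the paper works with EXOR (Reed-Muller) covers, where acceptance is more subtle than simple set difference, and the cubes below are hypothetical:

```python
def greedy_cover(on_set, cubes):
    """Greedy cube selection: try cubes from largest to smallest and
    accept any cube that removes at least one remaining ON-set minterm.
    A plain set-cover simplification of the paper's EXOR-based selection."""
    remaining = set(on_set)
    chosen = []
    for cube in sorted(cubes, key=len, reverse=True):
        covered = remaining & set(cube)
        if covered:                 # cube decreases the remaining terms
            chosen.append(cube)
            remaining -= covered
        if not remaining:
            break
    return chosen, remaining

# Hypothetical example: minterms of a 3-variable function and candidate cubes.
on_set = {0, 1, 2, 3, 5, 7}
cubes = [(0, 1, 2, 3), (1, 3, 5, 7), (5,), (7,)]
chosen, left = greedy_cover(on_set, cubes)
```

Because each cube is examined once and never revisited, the selection runs in a single pass over the sorted cube list, which is the source of the speed advantage the abstract claims.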

Korean Professional Baseball League Scheduling (최적화 기법을 활용한 프로야구 일정 계획)

  • Gong, Gyeong-Su;Lee, Yeong-Ho
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2008.10a
    • /
    • pp.299-303
    • /
    • 2008
  • In this paper, we discuss the scheduling problems of the Korean Professional Baseball League (KPBL) and propose approaches to solve them by applying integer programming. A schedule in a sports league must satisfy many timing constraints, grouped into organizational, attractiveness, and fairness requirements. Organizational requirements cover a set of rules guaranteeing that all games can be scheduled according to the regulations imposed by the Korean Baseball Organization (KBO). Attractiveness requirements focus on what stadium visitors, television spectators, and the clubs expect, that is, a varied, eventful, and exciting season. Fairness requirements guarantee that no team is handicapped or favored in comparison with the others. In addition to finding a feasible schedule that meets all the constraints, the problem addressed in this paper has the additional complexity of minimizing travel costs while balancing the number of home games across teams. We formalize the KPBL problem as an optimization problem and adopt the concept of evolution strategy to solve it. The proposed method efficiently finds better results than previously developed approaches.
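The feasibility core of such a league schedule is a round robin: every pair of teams meets, with one game per team per round. A minimal circle-method sketch of that core (the paper's integer program additionally handles KBO regulations, travel cost, and home-game balance, none of which are modeled here):

```python
def round_robin(teams):
    """Circle-method single round robin: every pair meets exactly once,
    one game per team per round. A feasibility sketch only."""
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)            # odd team count: add a bye slot
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)
                 if teams[i] is not None and teams[n - 1 - i] is not None]
        rounds.append(pairs)
        # fix the first team, rotate the rest one position
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]
    return rounds

schedule = round_robin(["A", "B", "C", "D"])
```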

A Heuristic Polynomial Time Algorithm for Crew Scheduling Problem

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.11
    • /
    • pp.69-75
    • /
    • 2015
  • This paper suggests a heuristic polynomial-time algorithm for the crew scheduling problem, a kind of optimization problem. This problem has been attacked by linear programming, set cover, set partitioning, column generation, etc., but these methods do not always obtain the optimal solution. This paper sorts the transit costs $c_{ij}$ in ascending order and merges the crew paths of tasks $i$ and $j$ whenever the sum of operation times $\Sigma o$ is less than the day working time $T$. As a result, we obtain the minimum number of crews $K_{\min}$ and the minimum transit cost $z$. For a specific number of crews $K$ ($K>K_{\min}$), we delete the $K-K_{\min}$ largest $c_{ij}$ values to partition the crew paths. For 5 benchmark data sets, this algorithm obtains lower transit costs than state-of-the-art algorithms and finds the minimum number of crews.
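The merge rule in the abstract (scan $c_{ij}$ in ascending order; merge the paths of $i$ and $j$ if the combined operation time stays within the day limit $T$) can be sketched directly. The data below are made up, and a "path" is simplified to an unordered set of tasks, which glosses over path-ordering details the paper would handle:

```python
def merge_crews(op_time, costs, T):
    """Greedy sketch of the abstract's rule: scan transit costs c_ij in
    ascending order and merge tasks i and j into one crew path whenever
    the merged path's total operation time stays within the day limit T."""
    # path[t] -> set of tasks currently served by the same crew as t
    path = {t: {t} for t in op_time}
    total_cost = 0
    for (i, j), c in sorted(costs.items(), key=lambda kv: kv[1]):
        if path[i] is path[j]:
            continue                          # already in the same crew
        merged_time = sum(op_time[t] for t in path[i] | path[j])
        if merged_time <= T:
            merged = path[i] | path[j]        # merge the two crew paths
            for t in merged:
                path[t] = merged
            total_cost += c
    crews = {frozenset(p) for p in path.values()}
    return len(crews), total_cost

# hypothetical instance: 4 tasks, day working time T = 8
op_time = {1: 3, 2: 4, 3: 2, 4: 5}
costs = {(1, 2): 1, (1, 3): 2, (2, 3): 3, (3, 4): 4, (1, 4): 5, (2, 4): 6}
n_crews, cost = merge_crews(op_time, costs, T=8)
```

Deleting the largest accepted $c_{ij}$ values afterwards, as the abstract describes, splits merged paths back apart to reach any target crew count $K > K_{\min}$.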

Analysis of Reel Tape Packing process conditions using DOE (실험계획법을 이용한 Reel Tape Packaging 공정조건 분석)

  • Kim, Jae Kyung;Na, Seung Jun;Kwon, Jun Hwan;Jeon, Euy Sik
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.2
    • /
    • pp.105-109
    • /
    • 2020
  • Today's placement machines can pick and place thousands of components per hour with a very high degree of accuracy. Packaged semiconductor chips are inserted into a carrier at regular intervals, covered with a tape to protect the chips from external impact, and supplied in roll form. These packaging processes also progress rapidly in a consistent direction, so the peelback strength between the cover tape and the carrier depends on the main process conditions. In this paper, we analyzed the main process variables that affect peelback strength in the reel tape packaging process for packaging semiconductor chips, examining the main effects and interactions. The peelback strength range required in the packaging process was set as the nominal-the-best characteristic, and the optimum process condition satisfying it was derived.

An Optimal Allocation Mechanism of Location Servers in A Linear Arrangement of Base Stations (선형배열 기지국을 위한 위치정보 서버의 최적할당 방식)

  • Lim, Kyung-Shik
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.426-436
    • /
    • 2000
  • Given a linear arrangement of n base stations that generate multiple types of traffic among themselves, we consider the problem of finding a set of disjoint clusters covering the n base stations, so that each cluster is assigned a location server. Our goal is to minimize the total communication cost of the entire network, where the cost of intra-cluster communication is usually lower than that of inter-cluster communication for each type of traffic. The optimization problem is transformed into an equivalent problem using the concept of relative cost, which expresses the difference between intra-cluster and inter-cluster communication costs. Using the relative cost matrix, an efficient $O(nm^2)$ algorithm, where m is the number of clusters in a partition, is designed by dynamic programming. Given the size constraint on a cluster and the total allowable communication cost for the entire network, the algorithm also finds all the valid partitions in the same polynomial time.
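Because the base stations are linearly arranged, optimal clusters are contiguous intervals, which is what makes dynamic programming apply: the best partition of the first j stations extends the best partition of some shorter prefix by one cluster. A generic sketch of that recurrence with a toy cost function (the paper's actual relative-cost matrix and constraints are not modeled):

```python
def best_partition(n, cluster_cost):
    """DP over a linear arrangement: dp[j] is the cheapest way to cover
    stations 0..j-1 with contiguous clusters,
    dp[j] = min over i < j of dp[i] + cluster_cost(i, j)."""
    INF = float("inf")
    dp = [0.0] + [INF] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = dp[i] + cluster_cost(i, j)   # last cluster covers i..j-1
            if c < dp[j]:
                dp[j], cut[j] = c, i
    # walk the cut points backwards to recover the clusters
    clusters, j = [], n
    while j > 0:
        clusters.append((cut[j], j))
        j = cut[j]
    return dp[n], clusters[::-1]

# toy cost: fixed server cost 5 per cluster plus quadratic intra-cluster load
cost, clusters = best_partition(6, lambda i, j: 5 + (j - i) ** 2)
```

Here the naive recurrence is O(n^2) cost evaluations; the paper's relative-cost formulation and cluster-size constraint prune this further.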

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: it is quite possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that, in almost all cases, the points shared among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the bit width of a membership value m, and dm(fm) is the bit width of the index of the corresponding membership function. In our case, then, Length = 3 * (5 + 3) = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize the membership value of every fuzzy set on each memory row, a word of 8*5 bits; the memory dimension would then have been 128*40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on any element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
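The word-sizing arithmetic above can be checked directly from the formula Length = nfm * (dm(m) + dm(fm)), using the figures given in the abstract (32 truth levels, 8 membership functions, nfm = 3); the comparison baseline is the vectorial scheme that stores all 8 membership values per row:

```python
import math

def word_length(levels, n_funcs, nfm):
    """Antecedent-memory word length per the formula in the text:
    Length = nfm * (dm(m) + dm(fm))."""
    dm_m = math.ceil(math.log2(levels))    # bits per membership value
    dm_fm = math.ceil(math.log2(n_funcs))  # bits per function index
    return nfm * (dm_m + dm_fm)

# figures from the text: 32 truth levels, 8 membership functions, nfm = 3
compact = word_length(32, 8, 3)            # proposed scheme: 3 * (5 + 3)
vectorial = 8 * math.ceil(math.log2(32))   # store all 8 values per row
```

With a 128-element universe of discourse this gives the 128*24-bit versus 128*40-bit memory dimensions quoted in the text.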

Response for Lead Block Thickness of Parallel Plate Detector using Dielectric Film (유전체필름을 이용한 평행판검출기의 납 차폐물 두께변화에 대한 반응)

  • Kim Yong-Eun;Cho Moon-June;Kim Jun-Sang;Oh Young-Kee;Kim Jhin-Kee;Shin Kyo-Chul;Kim Jeung-Kee;Jeong Dong-Hyeok;Kim Ki-Hwan
    • Progress in Medical Physics
    • /
    • v.17 no.1
    • /
    • pp.1-5
    • /
    • 2006
  • A parallel plate detector containing PTFE films in FEP film for relative dosimetry was designed to measure the response of detectors to 6 and 10 MV X-rays from a medical linear accelerator through different thicknesses of lead. The dielectric materials were 100 μm thick. The set-up conditions for measurements with this detector were as follows: SSD = 100 cm; the test detector was at a depth of 5 cm and the reference chamber at a depth of 10 cm from the phantom surface for the 6 and 10 MV X-rays. Lead blocks were designed to cover the irradiated field and were added to the tray to increase the thickness sequentially. We found that the detector response decreased exponentially with the thickness of lead added. The linear attenuation coefficients of the test detector and reference chamber were 0.1414 and 0.541, respectively, for 6 MV X-rays, and 0.1358 and 0.5279 for 10 MV X-rays. The test detector response was greater than that of the reference chamber. The response function was calculated from the measured values of the test detector and reference chamber using optimization, and the optimized constants for the detector response function were independent of the energy. As a result of optimizing the response function between detectors, the use of a relative dosimeter was validated, because the response of the test detector was 1% for 6 MV X-rays and 4% for 10 MV X-rays.
