• Title/Summary/Keyword: Computational Approaches

Finding Self-intersections of a Triangular Mesh by Using Visibility Maps (가시 정보를 이용한 삼각망의 꼬임 찾기)

  • Park S. C.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.9 no.4
    • /
    • pp.382-386
    • /
    • 2004
  • This paper presents an algorithm for the triangular mesh self-intersection problem. The key aspect of the proposed algorithm is to reduce the number of triangle pairs to be checked for intersection. To this end, it employs two different approaches, the V-group approach and the space partitioning approach. Even though both approaches share the objective of reducing the number of triangle-triangle intersection (TTI) pairs, their inherent characteristics are quite different. While the V-group approach works by topology (it reduces TTI pairs by guaranteeing no intersection among adjacent triangles), the space partitioning approach works by geometry (it reduces TTI pairs by guaranteeing no intersection among distant triangles). The complementary nature of the two approaches brings a substantial improvement in reducing the number of TTI pairs.
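
A minimal sketch of the space-partitioning side of this idea, assuming a uniform grid over triangle bounding boxes (the paper does not prescribe this particular data structure); the vertex-sharing filter below stands in for the V-group's adjacency guarantee:

```python
# Hash each triangle's bounding box into a uniform grid so that only
# triangles sharing a cell become candidate TTI pairs; adjacent triangles
# (sharing a vertex) are filtered out, mirroring the topology-based idea.
from collections import defaultdict
from itertools import combinations

def candidate_tti_pairs(vertices, triangles, cell_size):
    """triangles: list of (i, j, k) vertex-index triples into vertices."""
    grid = defaultdict(list)
    for t, (i, j, k) in enumerate(triangles):
        xs, ys, zs = zip(vertices[i], vertices[j], vertices[k])
        lo = [int(min(c) // cell_size) for c in (xs, ys, zs)]
        hi = [int(max(c) // cell_size) for c in (xs, ys, zs)]
        for cx in range(lo[0], hi[0] + 1):
            for cy in range(lo[1], hi[1] + 1):
                for cz in range(lo[2], hi[2] + 1):
                    grid[(cx, cy, cz)].append(t)
    pairs = set()
    for cell_tris in grid.values():
        for a, b in combinations(cell_tris, 2):
            if set(triangles[a]) & set(triangles[b]):
                continue  # adjacent triangles: assumed intersection-free
            pairs.add((min(a, b), max(a, b)))
    return pairs  # only these pairs need an exact triangle-triangle test
```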

Computational Approaches to Gene Prediction

  • Do Jin-Hwan;Choi Dong-Kug
    • Journal of Microbiology
    • /
    • v.44 no.2
    • /
    • pp.137-144
    • /
    • 2006
  • The problems associated with gene identification and the prediction of gene structure in DNA sequences have been the focus of increased attention over the past few years, as large-scale sequencing projects have acquired an immense amount of genome data. A variety of prediction programs have been developed to address these problems. This paper presents a review of the computational approaches and gene-finders commonly used for gene prediction in eukaryotic genomes. In general, two approaches have been adopted for this purpose: similarity-based and ab initio techniques. The information gleaned from these methods is then combined via a variety of algorithms, including Dynamic Programming (DP) or the Hidden Markov Model (HMM), and used for gene prediction from genomic sequences.
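
As a toy illustration of the HMM/DP machinery such gene-finders rely on, the sketch below decodes a sequence with a two-state (exon/intergenic) HMM via the Viterbi algorithm; all probabilities are invented for illustration and bear no relation to any real gene-finder:

```python
# Viterbi decoding of a two-state HMM: exon states are modeled as GC-rich,
# intergenic states as AT-rich; DP picks the most probable state path.
import math

STATES = ("exon", "intergenic")
TRANS = {"exon": {"exon": 0.9, "intergenic": 0.1},
         "intergenic": {"exon": 0.1, "intergenic": 0.9}}
EMIT = {"exon": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
        "intergenic": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def viterbi(seq):
    v = [{s: math.log(0.5) + math.log(EMIT[s][seq[0]]) for s in STATES}]
    back = []
    for base in seq[1:]:
        row, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: v[-1][p] + math.log(TRANS[p][s]))
            row[s] = v[-1][prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][base])
            ptr[s] = prev
        v.append(row)
        back.append(ptr)
    state = max(STATES, key=lambda s: v[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

print(viterbi("ATATGCGCGCATAT"))  # labels each base as exon/intergenic
```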

Evaluation of Displacement-based Approaches for a Shear Wall Structure (전단벽구조체에 대한 변위기반 내진성능법의 평가)

  • 최상현;현창헌;최강룡;김문수
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2003.10a
    • /
    • pp.465-472
    • /
    • 2003
  • In this paper, displacement-based seismic design approaches are evaluated utilizing shaking-table test data of a 1:3 scaled reinforced concrete (RC) bearing wall structure provided by IAEA. The maximum responses of the structure are estimated using the two prominent displacement-based approaches, i.e., the capacity spectrum method and the displacement coefficient method, and compared with the measured responses. For comparison purposes, linear and nonlinear time history analyses and a response spectrum analysis are also performed. The results indicate that the capacity spectrum method underestimates the response of the structure in the inelastic range, while the displacement coefficient method yields reasonable values in general.

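For context, the capacity spectrum method evaluated above starts by mapping the pushover curve into spectral coordinates. A hedged sketch of that conversion, with placeholder first-mode properties rather than values from the IAEA test structure:

```python
# Convert a pushover curve (base shear V vs. roof displacement D) into
# capacity-spectrum coordinates (Sa, Sd) using first-mode properties.
def to_capacity_spectrum(V, D, W, alpha1, gamma1, phi_roof):
    """W: seismic weight; alpha1: modal mass coefficient;
    gamma1: participation factor; phi_roof: roof ordinate of mode 1."""
    Sa = [v / (alpha1 * W) for v in V]         # spectral acceleration (g)
    Sd = [d / (gamma1 * phi_roof) for d in D]  # spectral displacement
    return Sa, Sd

# Placeholder numbers, purely illustrative:
Sa, Sd = to_capacity_spectrum(V=[0.0, 500.0, 800.0], D=[0.0, 0.01, 0.03],
                              W=2000.0, alpha1=0.8, gamma1=1.3, phi_roof=1.0)
```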

Inflow Conditions for Modelling the Neutral Equilibrium ABL Based on Standard k-ε Model

  • Jinghan Wang;Chao Li;Yiqing Xiao;Jinping Ou
    • International Journal of High-Rise Buildings
    • /
    • v.11 no.4
    • /
    • pp.331-346
    • /
    • 2022
  • Reproducing a horizontally homogeneous atmospheric boundary layer in computational wind engineering is essential for predicting wind loads on structures. One important issue is the use of fully developed inflow conditions, which raises the problem of consistency between the inflow condition and the internal wall roughness. Thus, by analyzing previous results of computational fluid dynamics modeling of the turbulent, horizontally homogeneous atmospheric boundary layer, we modify past hypotheses and derive in detail a new type of inflow condition for the standard k-ε turbulence model. A group of remedial approaches, including a formulation for the wall shear stress and fixing the values of turbulent kinetic energy and turbulent dissipation rate in the first layer of wall-adjacent cells, is also derived to realize consistency between the inflow condition and the internal roughness. By combining these approaches with four different sets of inflow conditions, the well-maintained atmospheric boundary layer flow verifies the feasibility and capability of the proposed inflow conditions and remedial approaches.
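
For reference, the classical Richards-Hoxey inflow profiles for the standard k-ε model are the baseline that modified formulations of this kind depart from; this sketch shows those baseline profiles only, not the paper's new condition:

```python
# Richards-Hoxey equilibrium inflow profiles for the standard k-ε model:
# log-law mean velocity, height-constant turbulent kinetic energy, and a
# dissipation rate decaying with height. Constants are conventional values.
import math

KAPPA, C_MU = 0.41, 0.09  # von Karman constant, k-ε model constant

def rh_inflow(z, u_star, z0):
    """Velocity, turbulent kinetic energy, and dissipation at height z,
    given friction velocity u_star and aerodynamic roughness length z0."""
    u = (u_star / KAPPA) * math.log((z + z0) / z0)
    k = u_star**2 / math.sqrt(C_MU)
    eps = u_star**3 / (KAPPA * (z + z0))
    return u, k, eps

# e.g., friction velocity 0.4 m/s over roughness length 0.01 m:
for z in (2.0, 10.0, 50.0):
    print(z, rh_inflow(z, u_star=0.4, z0=0.01))
```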

New Approaches of Inverse Soft Rough Sets and Their Applications in a Decision Making Problem

  • Demirtas, Naime;Hussain, Sabir;Dalkilic, Orhan
    • Journal of Applied Mathematics & Informatics
    • /
    • v.38 no.3_4
    • /
    • pp.335-349
    • /
    • 2020
  • We present inverse soft rough sets by using inverse soft sets and soft rough sets. We study different approaches to inverse soft rough sets and examine the relationships between them. We also discuss and explore the basic properties of these approaches. Moreover, we develop an algorithm based on these concepts and apply it to a decision-making problem to demonstrate the applicability of the proposed methods.
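
The abstract does not spell out the algorithm, so as a generic illustration of soft-set decision making, the common choice-value scheme is sketched below; it is not the authors' inverse soft rough set procedure:

```python
# Score each object by its choice value: the number of desirable
# parameters it satisfies in a binary soft-set table, then pick the best.
def choice_values(table):
    """table: dict mapping object -> dict of parameter -> 0/1 membership."""
    return {obj: sum(params.values()) for obj, params in table.items()}

table = {"house1": {"cheap": 1, "wooden": 0, "green": 1},
         "house2": {"cheap": 0, "wooden": 1, "green": 1},
         "house3": {"cheap": 1, "wooden": 1, "green": 1}}
scores = choice_values(table)
print(max(scores, key=scores.get))  # 'house3' has the highest choice value
```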

OMICS approaches in cardiovascular diseases: a mini review

  • Sohag, Md. Mehadi Hasan;Raqib, Saleh Muhammed;Akhmad, Syaefudin Ali
    • Genomics & Informatics
    • /
    • v.19 no.2
    • /
    • pp.13.1-13.8
    • /
    • 2021
  • Ranked first among the deadliest diseases in the world, cardiovascular diseases (CVDs) are a global burden involving alterations in the heart and blood vessels. Early diagnostics and prognostics could be the best possible solution in CVD management. OMICS (genomics, proteomics, transcriptomics, and metabolomics) approaches could tackle the challenges posed by CVDs. Genome-wide association studies, together with next-generation sequencing and various computational biology tools, could offer new insight into the early detection and possible therapeutics of CVDs. Human cardiac proteins have also been characterized by mass spectrometry, which opens up the scope of proteomics approaches in CVD. Besides this, the regulation of gene expression studied by transcriptomics approaches provides further insight, while metabolomics is the endpoint downstream of multi-omics approaches to confront CVDs from their early onset. Although many challenges remain to be overcome in CVD management, OMICS approaches are certainly a new prospect.

The stick-slip decomposition method for modeling large-deformation Coulomb frictional contact

  • Amaireh, Layla. K.;Haikal, Ghadir
    • Coupled systems mechanics
    • /
    • v.7 no.5
    • /
    • pp.583-610
    • /
    • 2018
  • This paper discusses the issues associated with modeling frictional contact between solid bodies undergoing large deformations. The most common model for friction on contact interfaces in solid mechanics is the Coulomb friction model, in which two distinct responses are possible: stick and slip. Handling the transition between these two phases computationally has been a source of algorithmic instability, lack of convergence and non-unique solutions, particularly in the presence of large deformations. Most computational models for frictional contact have used penalty or updated Lagrangian approaches to enforce frictional contact conditions. These two approaches, however, present some computational challenges due to conditioning issues in penalty-type implementations and the iterative nature of the updated Lagrangian formulation, which, particularly in large simulations, may lead to relatively slow convergence. Alternatively, a plasticity-inspired implementation of frictional contact has been shown to handle the stick-slip conditions in a local, algorithmically efficient manner that substantially reduces computational cost and successfully avoids the issues of instability and lack of convergence often reported with other methods (Laursen and Simo 1993). The formulation of this approach, however, has been limited to the small deformations realm, a fact that severely limited its application to contact problems where large deformations are expected. In this paper, we present an algorithmically consistent formulation of this method that preserves its key advantages, while extending its application to the realm of large-deformation contact problems. We show that the method produces results similar to the augmented Lagrangian formulation at a reduced computational cost.
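
A minimal scalar sketch of the plasticity-inspired stick-slip update described above, using an assumed penalty-type tangential stiffness (the paper's actual formulation is tensorial and consistent with large deformations):

```python
# Return mapping for Coulomb friction: assume stick (elastic trial
# tangential traction), then project onto the Coulomb cone if the trial
# violates |t_T| <= mu * p. Scalars stand in for the tangential traction.
def coulomb_return_map(t_old, dg_t, p, mu, k_t):
    """t_old: previous tangential traction; dg_t: incremental tangential slip;
    p: (compressive) normal pressure; mu: friction coefficient;
    k_t: illustrative penalty-type tangential stiffness."""
    t_trial = t_old + k_t * dg_t   # stick predictor
    limit = mu * p                 # Coulomb slip threshold
    if abs(t_trial) <= limit:
        return t_trial, "stick"    # trial state is admissible
    return limit * (1 if t_trial > 0 else -1), "slip"  # radial return

print(coulomb_return_map(t_old=0.0, dg_t=1e-4, p=100.0, mu=0.3, k_t=1e6))
```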

Development of Nonlinear Programming Approaches to Large Scale Linear Programming Problems (비선형계획법을 이용한 대규모 선형계획해법의 개발)

  • Chang, Soo-Y.
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.17 no.2
    • /
    • pp.131-142
    • /
    • 1991
  • The concept of a criterion function is proposed as a framework for comparing the geometric and computational characteristics of various nonlinear programming approaches to linear programming, such as the method of centers, Karmarkar's algorithm, and the gravitational method. We also discuss various computational issues involved in obtaining an efficient parallel implementation of these methods. Clearly, the most time-consuming part of solving a linear programming problem is the direction-finding procedure, where we obtain an improving direction. In most cases, finding an improving direction is equivalent to solving a simple optimization problem defined at the current feasible solution. Again, this simple optimization problem can be seen as a least squares problem, and the computational effort of solving the least squares problem is, in fact, the same as that of solving a system of linear equations. Hence, solving a system of linear equations quickly is very important for solving a linear programming problem efficiently. For solving systems of linear equations on parallel computing machines, an iterative method seems more adequate than direct methods. Therefore, we propose one possible strategy for an efficient parallel implementation of an iterative method for solving a system of equations and present a summary of the computational experiments performed on a transputer-based parallel computing board installed in an IBM PC.

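As one concrete instance of the iterative solvers the abstract advocates for parallel machines, here is a compact conjugate gradient sketch; the paper does not commit to CG specifically, but its matrix-vector product is the kernel that parallelizes naturally (and the least-squares normal equations yield the symmetric positive-definite system CG requires):

```python
# Conjugate gradient for a symmetric positive-definite system A x = b.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p           # the matrix-vector product: parallelizable kernel
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD example system
print(conjugate_gradient(A, np.array([1.0, 2.0])))
```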

Development of an uncertainty quantification approach with reduced computational cost for seismic fragility assessment of cable-stayed bridges

  • Akhoondzade-Noghabi, Vahid;Bargi, Khosrow
    • Earthquakes and Structures
    • /
    • v.23 no.4
    • /
    • pp.385-401
    • /
    • 2022
  • Uncertainty quantification is the most important challenge in the seismic fragility assessment of structures. Increasing the precision of the quantification method leads to more reliable results but at the same time increases the computational cost, which is highly undesirable in cases such as reliability-based design optimization, which involves numerous probabilistic seismic analyses. Accordingly, the authors' effort has been put into the development and validation of an approach with reduced computational cost for seismic fragility assessment. In this regard, it is necessary to apply appropriate methods for the separate consideration of two categories of uncertainty: those related to the ground motions and those related to the structural characteristics. Cable-stayed bridges have been specifically selected because, as a result of their complexity and the correspondingly time-consuming seismic analyses, reducing the computations associated with their fragility analyses is worth studying. To achieve this, the fragility assessment of three case studies is performed based on the existing and proposed approaches, and a comparative study on the efficiency of seismic response estimation is carried out. For this purpose, statistical validation is conducted on the seismic demands and fragilities resulting from the mentioned approaches, and through a comprehensive interpretation, sufficient arguments for the acceptable errors of the proposed approach are presented. Finally, this study concludes that the combination of the Capacity Spectrum Method (CSM) and Uniform Design Sampling (UDS), in the advanced forms proposed here, can provide adequate accuracy in seismic fragility estimation at a significantly reduced computational cost.
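
For context, the end product of such an assessment is typically a lognormal fragility curve fitted to demand samples; the sketch below uses the common moments-based recipe with invented numbers, not necessarily the authors' estimator:

```python
# Fit a lognormal fragility curve P(failure | IM) = Phi((ln IM - ln theta) / beta)
# from intensity measures at which each sample reached the limit state.
import math, statistics

def fit_lognormal_fragility(im_at_failure):
    logs = [math.log(x) for x in im_at_failure]
    theta = math.exp(statistics.mean(logs))  # median capacity
    beta = statistics.stdev(logs)            # lognormal dispersion
    return theta, beta

def fragility(im, theta, beta):
    z = (math.log(im) - math.log(theta)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

theta, beta = fit_lognormal_fragility([0.31, 0.42, 0.38, 0.55, 0.47])
print(fragility(0.4, theta, beta))  # P(exceeding limit state at IM = 0.4 g)
```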