• Title/Summary/Keyword: A level-set method

Search results: 1,382

The Efficient Algorithm for Simulating the Multiphase Flow

  • Yoon Seong Y;Yabe T.
    • Journal of computational fluids engineering
    • /
    • v.9 no.1
    • /
    • pp.18-24
    • /
    • 2004
  • A unified simulation of multiphase flow by a predictor-corrector scheme based on the CIP method is introduced. In this algorithm, the interface between different phases is identified by a density function and tracked by solving an advection equation. Solid body motion is modeled by translation and angular motion. The mathematical formulation and numerical results are also described. To verify the efficiency, accuracy, and capability of the proposed algorithm, two-dimensional incompressible cavity flow, the motion of a ball floating into water, and a single bubble rising by buoyancy are simulated with the present scheme. The results confirm that the scheme gives an efficient, stable, and reasonable solution to multiphase flow problems.
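The interface-capturing step above — advecting a density (color) function with the flow velocity — can be sketched with a first-order upwind scheme. The paper itself uses the higher-order CIP method; the grid size, velocity, and interface position below are illustrative only.

```python
# Minimal 1D upwind advection of a density (color) function, illustrating
# interface tracking by solving d(phi)/dt + u * d(phi)/dx = 0.
# First-order sketch, NOT the CIP scheme used in the paper.

def advect_upwind(phi, u, dx, dt, steps):
    """Advance the density function phi with constant velocity u > 0."""
    phi = list(phi)
    c = u * dt / dx  # CFL number; must satisfy c <= 1 for stability
    assert 0 < c <= 1.0
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi)):
            new[i] = phi[i] - c * (phi[i] - phi[i - 1])
        phi = new
    return phi

# A sharp interface at x = 0.3 on a unit domain: phi = 1 in one phase, 0 in the other.
n = 100
dx = 1.0 / n
phi0 = [1.0 if i * dx < 0.3 else 0.0 for i in range(n)]
phi1 = advect_upwind(phi0, u=1.0, dx=dx, dt=0.5 * dx, steps=40)
# After 40 steps the interface has moved by u * steps * dt = 0.2,
# with some numerical smearing typical of first-order upwinding.
```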

Effect of missing values in detecting differentially expressed genes in a cDNA microarray experiment

  • Kim, Byung-Soo;Rha, Sun-Young
    • Bioinformatics and Biosystems
    • /
    • v.1 no.1
    • /
    • pp.67-72
    • /
    • 2006
  • The aim of this paper is to discuss the effect of missing values on detecting differentially expressed genes in a cDNA microarray experiment, in the context of a one-sample problem. We conducted a cDNA microarray experiment to detect genes differentially expressed in the metastasis of colorectal cancer, based on twenty patients who underwent liver resection due to liver metastasis from colorectal cancer. Total RNA from metastatic liver tumor and adjacent normal liver tissue of a single patient was labeled with Cy5 and Cy3, respectively, and competitively hybridized to a cDNA microarray with 7,775 human genes. We used $M=\log_2(R/G)$ for the signal evaluation, where R and G denote the fluorescent intensities of the Cy5 and Cy3 dyes, respectively. The statistical problem comprises a one-sample test of E(M)=0 for each gene and involves multiple tests. The twenty cDNA microarray data sets would comprise a matrix of dimension 7775 by 20 if there were no missing values; however, missing values occur for various reasons. For each gene, the no-missing proportion (NMP) was defined as the proportion of non-missing values out of twenty. In detecting differentially expressed (DE) genes, we used the genes whose NMP is greater than or equal to 0.4 and then sequentially increased the NMP threshold by 0.1 to investigate its effect on the detection of DE genes. For each fixed NMP, we imputed the missing values with the K-nearest neighbor method (K=10) and applied the nonparametric t-test of Dudoit et al. (2002), SAM by Tusher et al. (2001), and the empirical Bayes procedure of Lönnstedt and Speed (2002) to find the effect of missing values on the final outcome. These three procedures yielded substantially agreeing results in detecting DE genes; of the three, we used SAM to explore the acceptable NMP level. The optimum NMP for this data set turned out to be 80%. It is more desirable to find the optimum level of NMP for each data set by applying the method described in this note, when the plot of (NMP, number of overlapping genes) shows a turning point.
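The K-nearest neighbor imputation step (K=10) described above can be sketched in plain Python: for each missing entry, average the values from the k most similar genes that observe that array. The toy matrix, k value, and distance normalization below are illustrative, not the paper's data.

```python
import math

def knn_impute(matrix, k=10):
    """Impute missing entries (None) row-wise by averaging the values of the
    k most similar rows (Euclidean distance over commonly observed columns).
    A small sketch of KNN imputation as used in the paper (K=10 there)."""
    result = [row[:] for row in matrix]
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            if v is not None:
                continue
            # rank the other rows that observe column j by distance to row i
            candidates = []
            for i2, other in enumerate(matrix):
                if i2 == i or other[j] is None:
                    continue
                shared = [(a, b) for a, b in zip(row, other)
                          if a is not None and b is not None]
                if not shared:
                    continue
                d = math.sqrt(sum((a - b) ** 2 for a, b in shared) / len(shared))
                candidates.append((d, other[j]))
            candidates.sort(key=lambda t: t[0])
            neighbors = candidates[:k]
            if neighbors:
                result[i][j] = sum(v2 for _, v2 in neighbors) / len(neighbors)
    return result

# Toy 4-gene x 3-array log-ratio matrix with one missing value (k=2 for toy size).
genes = [[1.0, 1.1, None],
         [1.0, 1.0, 1.2],
         [0.9, 1.1, 1.1],
         [-2.0, -2.1, -1.9]]
filled = knn_impute(genes, k=2)
# The missing entry is filled with the average of its two nearest genes.
```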


Solving Mixed Strategy Equilibria of Multi-Player Games with a Transmission Congestion (다자게임 전력시장에서 송전선 혼잡시의 복합전략 내쉬균형 계산)

  • Lee, Kwang-Ho
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.55 no.11
    • /
    • pp.492-497
    • /
    • 2006
  • Nash Equilibrium (NE) is essential for investigating a participant's bidding strategy in a competitive electricity market. Transmission line constraints make the NE difficult to compute because they give rise to a mixed-strategy NE instead of a pure-strategy NE, and computing a mixed strategy is more complicated still in a multi-player game. The competition among multiple participants is modeled as a two-level hierarchical optimization problem, and a mathematical programming approach is widely used to find this equilibrium; however, solving for a mixed-strategy NE remains difficult. This paper presents two propositions that add heuristics to the mathematical programming method, based on empirical studies of mixed strategies in numerous sample systems. Building on the propositions, a new formulation is provided as a set of linear and nonlinear equations, and an algorithm is suggested that uses the propositions and the newly formulated equations.
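For intuition on mixed strategies: in a game with no pure-strategy NE, the equilibrium probabilities are those that make each opponent indifferent between its pure strategies. A minimal two-player 2x2 sketch follows; the paper treats multi-player games with transmission congestion, which this toy example does not capture.

```python
def mixed_ne_2x2(A, B):
    """A[i][j], B[i][j]: payoffs to players 1 and 2.
    Returns (p, q): probability that player 1 plays row 0 and that
    player 2 plays column 0, assuming a fully mixed equilibrium exists."""
    # Player 2 indifferent between columns:
    #   p*B00 + (1-p)*B10 = p*B01 + (1-p)*B11
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # Player 1 indifferent between rows:
    #   q*A00 + (1-q)*A01 = q*A10 + (1-q)*A11
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies has no pure-strategy NE; its mixed NE is (1/2, 1/2).
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = mixed_ne_2x2(A, B)  # -> (0.5, 0.5)
```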

A Specification Technique for Product Line Core Assets using MDA / PIM (MDA / PIM을 이용한 제품계열 핵심자산의 명세 기법)

  • Min, Hyun-Gi;Han, Man-Jib;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.9
    • /
    • pp.835-846
    • /
    • 2005
  • A Product Line (PL) is a set of products (applications) that share common assets in a domain, and Product Line Engineering (PLE) is the set of principles, techniques, mechanisms, and processes that enables the instantiation of product lines. Core assets, the common assets, are created and instantiated to make products in PLE. Model Driven Architecture (MDA) is a software development paradigm that emphasizes automatic product development. Therefore, the advantages of both paradigms, PLE and MDA, can be obtained if core assets are represented as a PIM in MDA with a predefined automatic mechanism. A PLE framework at the PIM level must be interpretable by MDA tools; however, there is no standard UML profile for representing core assets, and research on representing a PLE framework is not yet sufficient to generate core assets and products automatically. We represent core assets at the PIM level in terms of a structural view and a semantic view, and we suggest methods for representing the architecture, components, workflow, algorithms, and decision model. Representing the framework with PLE and MDA in this way improves the productivity, applicability, maintainability, and quality of products.

Software Effort Estimation Using Artificial Intelligence Approaches (인공지능 접근방법에 의한 S/W 공수예측)

  • Jun, Eung-Sup
    • Korea Society of IT Services: Conference Proceedings
    • /
    • 2003.11a
    • /
    • pp.616-623
    • /
    • 2003
  • Since the computing environment changes very rapidly, estimating software effort is difficult because it is not easy to collect a sufficient number of relevant cases from historical data. If we pinpoint the cases, the number of cases becomes too small; if we adopt too many cases, the relevance declines. In this paper we therefore attempt to balance the number of cases against their relevance. Because many studies on software effort estimation have shown that neural network models perform at least as well as other approaches, we selected the neural network model as the basic estimator and propose a search method that finds the right level of relevant cases for it. For the selected case set, eliminating the qualitative input factors that share the same values can reduce the scale of the neural network model. Since there is a multitude of combinations of case sets, we need to search for the optimal reduced neural network model and its corresponding case set. To find a quasi-optimal model in the hierarchy of reduced neural network models, we adopted the beam search technique and devised the Case-Set Selection Algorithm, which can be adopted in case-adaptive software effort estimation systems.
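The beam search over a hierarchy of reduced models can be sketched generically: keep only the best few candidates at each level of the hierarchy. The expand/score functions and toy factor values below are hypothetical stand-ins for the paper's Case-Set Selection Algorithm, which scores reduced neural-network models.

```python
def beam_search(initial, expand, score, beam_width=3, depth=5):
    """Generic beam search: retain only the top `beam_width` candidates
    at each level, tracking the best state seen so far."""
    beam = [initial]
    best = initial
    for _ in range(depth):
        candidates = []
        for state in beam:
            candidates.extend(expand(state))
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
        if score(beam[0]) > score(best):
            best = beam[0]
    return best

# Toy problem: choose a subset of input factors maximizing value minus a
# per-factor cost, starting from the full set (mirrors model reduction).
values = {'a': 5, 'b': 1, 'c': 4, 'd': 0.5}

def expand(subset):
    # drop one factor at a time
    return [tuple(f for f in subset if f != x) for x in subset]

def score(subset):
    return sum(values[f] for f in subset) - 1.2 * len(subset)

best = beam_search(tuple(values), expand, score, beam_width=2, depth=4)
# The search settles on the subset {'a', 'c'}.
```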


Fuzzy Regression Analysis Using Fuzzy Neural Networks (퍼지 신경망에 의한 퍼지 회귀분석)

  • Kwon, Ki-Taek
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.23 no.2
    • /
    • pp.371-383
    • /
    • 1997
  • This paper proposes a fuzzy regression method using fuzzy neural networks for the case where a membership value is attached to each input-output pair. First, a method of linear fuzzy regression analysis is described that interprets the reliability of each input-output pair as its membership value. Next, an architecture of fuzzy neural networks with fuzzy weights and fuzzy biases is shown; the fuzzy neural network maps a crisp input vector to a fuzzy output. A cost function is defined using the fuzzy output from the network and the corresponding target output with its membership value, and a learning algorithm is derived from the cost function. The derived algorithm trains the fuzzy neural network so that the level set of the fuzzy output includes the target output. Last, the proposed method is illustrated by computer simulations on numerical examples.
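The core idea — treating the membership value of each input-output pair as its reliability — can be illustrated with crisp weighted least squares, where low-membership pairs contribute less to the fit. This is only a sketch of the reliability interpretation; the paper's networks additionally carry fuzzy weights, fuzzy biases, and fuzzy outputs, which are omitted here.

```python
def weighted_linear_fit(xs, ys, memberships):
    """Closed-form linear fit y = a*x + b in which each input-output pair
    is weighted by its membership value (interpreted as reliability)."""
    sw = sum(memberships)
    mx = sum(h * x for h, x in zip(memberships, xs)) / sw
    my = sum(h * y for h, y in zip(memberships, ys)) / sw
    cov = sum(h * (x - mx) * (y - my) for h, x, y in zip(memberships, xs, ys))
    var = sum(h * (x - mx) ** 2 for h, x in zip(memberships, xs))
    a = cov / var
    b = my - a * mx
    return a, b

# Data lying near y = x, except the last pair, which is unreliable;
# its low membership value down-weights it in the fit.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.0, 2.1, 9.0]
hs = [1.0, 1.0, 1.0, 0.01]
a, b = weighted_linear_fit(xs, ys, hs)
# The recovered slope stays close to 1 despite the outlying pair.
```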


Calculated Damage of Italian Ryegrass in Abnormal Climate Based World Meteorological Organization Approach Using Machine Learning

  • Jae Seong Choi;Ji Yung Kim;Moonju Kim;Kyung Il Sung;Byong Wan Kim
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.43 no.3
    • /
    • pp.190-198
    • /
    • 2023
  • This study was conducted to calculate the damage to Italian ryegrass (IRG) caused by abnormal climate using machine learning and to present the damage on a map. A total of 1,384 IRG records were collected, and the climate data were obtained from the Korea Meteorological Administration's open meteorological data portal. The machine learning model xDeepFM was used to detect IRG damage, which was calculated from climate data of the Automated Synoptic Observing System (ASOS; 95 sites). Damage was defined as the difference between the dry matter yield (DMY) under normal climate (DMYnormal) and under abnormal climate (DMYabnormal). The normal climate was set as the 40 years of climate data corresponding to the years of the IRG data (1986~2020), and the level of abnormal climate was set as a multiple of the standard deviation, applying the World Meteorological Organization (WMO) standard. DMYnormal ranged from 5,678 to 15,188 kg/ha. The damage to IRG differed by region and by level of abnormal climate, ranging from -1,380 to 1,176 kg/ha for abnormal temperature, -3 to 2,465 kg/ha for abnormal precipitation, and -830 to 962 kg/ha for abnormal wind speed. The maximum damage was 1,176 kg/ha when the abnormal temperature level was -2 (+1.04℃), 2,465 kg/ha for all levels of abnormal precipitation, and 962 kg/ha when the abnormal wind speed level was -2 (+1.60 ㎧). The damage calculated by the WMO method was presented as a map using QGIS. Some areas were left blank because no climate data were available there; to calculate the damage in these areas, the Automatic Weather System (AWS), which provides data from more sites than the ASOS, could be used.
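Setting abnormality levels as multiples of the standard deviation of the long-term normals, in the WMO spirit described above, can be sketched as follows. The synthetic 40-year normals and the exact level definition (signed integer multiples of sigma, capped at 3) are assumptions for illustration; the paper's level definitions may differ.

```python
from statistics import mean, stdev

def abnormal_level(value, normals, max_level=3):
    """Classify a climate value against its long-term normals: level k means
    the value deviates by more than k standard deviations from the mean,
    with the sign indicating the direction of the anomaly."""
    mu, sigma = mean(normals), stdev(normals)
    z = (value - mu) / sigma
    k = min(int(abs(z)), max_level)
    return k if z >= 0 else -k

# 40 years of synthetic mean temperatures cycling around 12 (sigma ~ 0.72).
normals = [12 + ((i * 7) % 5 - 2) * 0.5 for i in range(40)]
level = abnormal_level(14.9, normals)  # far above the normals -> capped level
```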

A Study on the Method of Design of Drainage in Soft Clay (연약지반의 배수설계 기법에 관한 연구)

  • 지인택
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.39 no.3
    • /
    • pp.128-137
    • /
    • 1997
  • In this study, the influence on consolidation of the location of the pump inlet, set in a collection well to drain the pore water discharged by embankment on soft ground, was examined through a field test. The results are summarized as follows. 1. Initial consolidation curve values were larger than the theoretical values; this is thought to be due to secondary consolidation and three-dimensional strain of the soft clay. 2. The settlement value of the Hosino method was larger than that of the hyperbolic method, but the hyperbolic method predicted settlement more accurately. 3. When the pump inlet in the collection well was lowered from GL+0.3 m to GL-1.5 m, settlement increased by about 10 cm, and when the ground water level was restored in situ after pumping was completed, settlement expanded by about 7~8 cm; thus the location of the pump inlet had a remarkable influence on settlement. 4. If the pump inlet in the collection well for a large-scale estate or wide road site is placed lower than the original ground level, settlement will be accelerated effectively; at this stage an automatic pump must be used for pumping.
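The hyperbolic settlement-prediction method compared in item 2 fits the observed curve with s(t) = t / (α + βt): plotting t/s against t gives a straight line t/s = α + βt, and 1/β is the ultimate settlement. A minimal sketch on synthetic monitoring data (the time units and parameter values below are illustrative):

```python
def hyperbolic_fit(times, settlements):
    """Hyperbolic method: least-squares fit of t/s = alpha + beta*t,
    returning (alpha, beta, ultimate settlement = 1/beta)."""
    ratios = [t / s for t, s in zip(times, settlements)]
    n = len(times)
    mt = sum(times) / n
    mr = sum(ratios) / n
    beta = (sum((t - mt) * (r - mr) for t, r in zip(times, ratios))
            / sum((t - mt) ** 2 for t in times))
    alpha = mr - beta * mt
    return alpha, beta, 1.0 / beta

# Synthetic settlement readings generated from s(t) = t / (5 + 0.02 t),
# so the fit should recover an ultimate settlement of 1 / 0.02 = 50.
times = [10, 30, 60, 120, 240]
obs = [t / (5 + 0.02 * t) for t in times]
alpha, beta, s_ult = hyperbolic_fit(times, obs)
```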


Energy Consumption and Exercise Effect of University Students During Automatic Stepper Exercise

  • MOON, Hwang-woon;CHOI, Youn-Jin
    • The Korean Journal of Food & Health Convergence
    • /
    • v.7 no.6
    • /
    • pp.17-24
    • /
    • 2021
  • This study investigates the stage-by-stage exercise-physiological changes produced by the movement of an automatic stepper and analyzes the usefulness of the device. Eighteen male university students performed automatic stepper exercise at levels 5 and 10 out of 10. At 10, 20, and 30 minutes during exercise and at 5 and 10 minutes after stopping, the subjects were examined to analyze changes in energy consumption, respiratory exchange rate, heart rate, oxygen consumption per body weight, METs, cumulative energy consumption, and lactic acid, so as to verify the usefulness of the automatic stepper. The mean and standard deviation were calculated using SPSS, and one-way ANOVA with repeated measures was performed to verify the difference in means between time points; the LSD method was used for the post-hoc test, with the significance level set to α = .05. There were no significant changes at either level 5 or level 10, but the cumulative energy consumption increased significantly over time. In addition, the low exercise intensity and the correspondingly small increase in lactic acid indicated a safe exercise level. Future studies should examine various variables in depth through regular exercise programs for those who need safe exercise.

Comparison of Composite Methods of Satellite Chlorophyll-a Concentration Data in the East Sea

  • Park, Kyung-Ae;Park, Ji-Eun;Lee, Min-Sun;Kang, Chang-Keun
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.6
    • /
    • pp.635-651
    • /
    • 2012
  • To produce a level-3 monthly composite image from daily level-2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) chlorophyll-a concentration data in the East Sea, we applied four averaging methods: the simple average, the geometric mean, the maximum likelihood average, and the weighted average. Prior to each averaging method, we classified all pixels into normal pixels and abnormal speckles with anomalously high chlorophyll-a concentrations, to eliminate speckles from the subsequent compositing; as a result, none of the composite maps contained the erratic effect of speckles. The geometric mean method consistently tended to underestimate chlorophyll-a concentration values compared with the other methods. The weighted averaging method was quite similar to the simple average method but tended to overestimate in the high range of chlorophyll-a concentration. The maximum likelihood method was almost identical to the simple average method, showing small variance in the differences between the two and a high correlation (r=0.9962); however, it had the disadvantage of being very sensitive to speckles within a bin. The geometric mean deviated most significantly from the remaining methods regardless of the magnitude of the chlorophyll-a concentration values, and its bias error tended to grow as the standard deviation within a bin increased, i.e., as the data became less uniform. All the methods exhibited large errors where chlorophyll-a concentration values scattered strongly in time and space. This study emphasizes the importance of the speckle removal process and of a proper choice of averaging method to reduce composite errors in diverse scientific applications of satellite-derived chlorophyll-a concentration data.
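The contrast between the simple average and the geometric mean noted above can be checked directly: by the AM-GM inequality, the geometric mean never exceeds the arithmetic mean, which matches the systematic underestimation reported. The bin values and per-observation quality weights below are hypothetical.

```python
import math

def simple_average(vals):
    return sum(vals) / len(vals)

def geometric_mean(vals):
    # natural choice when chl-a is lognormally distributed; by AM-GM it
    # never exceeds the simple average, hence a low bias in composites
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

def weighted_average(vals, weights):
    return sum(w * v for w, v in zip(weights, vals)) / sum(weights)

# One composite bin of daily chl-a values (mg/m^3) with hypothetical
# quality weights down-weighting the higher, less certain observations.
bin_vals = [0.3, 0.5, 0.8, 2.0]
weights = [1.0, 1.0, 0.8, 0.5]
am = simple_average(bin_vals)          # 0.9
gm = geometric_mean(bin_vals)          # 0.24 ** 0.25 ~ 0.70, below am
wm = weighted_average(bin_vals, weights)
```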