• Title/Summary/Keyword: computation principle

Thermodynamical bending analysis of P-FG sandwich plates resting on nonlinear visco-Pasternak's elastic foundations

  • Abdeldjebbar Tounsi;Adda Hadj Mostefa;Abdelmoumen Anis Bousahla;Abdelouahed Tounsi;Mofareh Hassan Ghazwani;Fouad Bourada;Abdelhakim Bouhadra
    • Steel and Composite Structures
    • /
    • v.49 no.3
    • /
    • pp.307-323
    • /
    • 2023
  • In this research, a thermoelastic flexural analysis of silicon carbide/aluminum functionally graded (FG) sandwich plates under a harmonic, time-varying sinusoidal temperature load is presented. The plate is modeled using a simple two-dimensional integral shear deformation plate theory. The formulation contains integral terms that reduce the number of unknowns compared with similar solutions and therefore minimize the computation time. The transverse shear stresses follow a parabolic distribution and vanish at the free surfaces of the structure without any shear correction factors. The external load is applied on the upper face and varies through the thickness of the plate. The structure is assumed to be composed of three layers and to rest on nonlinear visco-Pasternak foundations. The governing equations of the system are derived via Hamilton's principle and solved in general form. The computed results are compared with those in the literature to validate the formulation. The effects of the parameters (material index, temperature exponent, geometry ratio, time, top/bottom temperature ratio, elastic foundation type, and damping coefficient) on the dynamic flexural response are studied.
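As context for the traction-free shear-stress claim above, higher-order shear deformation plate theories commonly realize it with a parabolic shape function; the Reddy-type kernel below is a standard illustration, not necessarily the authors' exact integral formulation:

```latex
f(z) = z\left(1 - \frac{4z^{2}}{3h^{2}}\right),
\qquad
f'(z) = 1 - \frac{4z^{2}}{h^{2}},
\qquad
f'\!\left(\pm\tfrac{h}{2}\right) = 0,
```

so a transverse shear stress proportional to $f'(z)$ vanishes automatically at the faces $z = \pm h/2$, with no shear correction factor.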

A Design of Multiplication Unit of Elementary Mathematics Textbook by Making the Best Use of Diversity of Algorithm (알고리즘의 다양성을 활용한 두 자리 수 곱셈의 지도 방안과 그에 따른 초등학교 3학년 학생의 곱셈 알고리즘 이해 과정 분석)

  • Kang, Heung-Kyu;Sim, Sun-Young
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.14 no.2
    • /
    • pp.287-314
    • /
    • 2010
  • An algorithm is a chain of mechanical procedures capable of solving a problem. In modern mathematics education, the teaching of algorithms still plays an important role, even though it has contracted compared with the past. A conspicuous characteristic of current elementary mathematics textbooks' treatment of multiplication is an excessive convergence toward the 'standard algorithm.' But there are many algorithms other than the standard algorithm for multiplication, and this diversity matters didactically. In this thesis, we reconstructed an experimental teaching and learning plan for the multiplication unit that makes the best use of the diversity of multiplication algorithms. Its core features are as follows. First, it covered various modified algorithms in addition to the standard algorithm. Second, it did not require children to use the standard algorithm exclusively, but encouraged them to select an algorithm according to their interest. We carried out a teaching experiment governed by this new lesson design and analyzed its effects. Through this study, we obtained the following results and suggestions. First, the experimental teaching and learning plan was effective for understanding the place-value principle and the distributive law: the experimental group, which learned various modified algorithms in addition to the standard algorithm, displayed a higher degree of understanding than the control group. Second, as for computational ability, the experimental group did not show better achievement than the control group. A likely cause is that we taught the children various modified algorithms and allowed them to select an algorithm by preference; the experimental group was more interested in the diversity of algorithms and their application than in correct computation. Third, the lattice method is not adopted in the majority of current mathematics textbooks, but it ranked high in the children's preferences. We suggest that mathematics textbooks developed henceforth should include the lattice method.
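For readers unfamiliar with the lattice method discussed above, the following Python sketch (a hypothetical helper, not material from the paper) shows the idea for non-negative integers: each digit pair fills one cell with a two-digit partial product, and the diagonals, which share a place value, are summed with carries, which is exactly where the place-value principle and the distributive law become visible.

```python
def lattice_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by the lattice method."""
    xs = [int(d) for d in str(a)]  # digits of a, left to right
    ys = [int(d) for d in str(b)]  # digits of b, left to right
    # One cell per digit pair; cells on diagonal i + j share a place value.
    diagonals = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            cell = x * y
            diagonals[i + j + 1] += cell % 10   # units digit of the cell
            diagonals[i + j] += cell // 10      # tens digit of the cell
    # Resolve carries from the rightmost diagonal leftward.
    for k in range(len(diagonals) - 1, 0, -1):
        diagonals[k - 1] += diagonals[k] // 10
        diagonals[k] %= 10
    return int("".join(map(str, diagonals)))

assert lattice_multiply(47, 36) == 47 * 36  # 1692
```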

Efficient Structure-Oriented Filter-Edge Preserving (SOF-EP) Method using the Corner Response (모서리 반응을 이용한 효과적인 Structure-Oriented Filter-Edge Preserving (SOF-EP) 기법)

  • Kim, Bona;Byun, Joongmoo;Seol, Soon Jee
    • Geophysics and Geophysical Exploration
    • /
    • v.20 no.3
    • /
    • pp.176-184
    • /
    • 2017
  • To interpret a seismic image precisely, random noise should be suppressed and the continuity of the image should be enhanced using appropriate smoothing techniques. The Structure-Oriented Filter-Edge Preserving (SOF-EP) technique is one of the actively researched and widely used methods for efficiently smoothing seismic data while preserving the continuity of the signal. The technique is based on the principle that diffusion acts from regions of large amplitude toward regions of small amplitude. In a continuous structure such as a horizontal layer, diffusion (smoothing) operates along the layer, thereby increasing the continuity of layers and eliminating random noise. In addition, diffusion across the boundaries of discontinuous structures such as faults can be avoided by employing a continuity decision factor, which improves the precision of the smoothing. However, the structure-oriented semblance technique conventionally used to calculate the continuity factor is time-consuming, with a cost that grows with the size of the filter and of the data. In this study, we first implemented the SOF-EP method and confirmed its effectiveness by applying it step by step to field data. Next, we proposed the corner response method, which can calculate the continuity decision factor efficiently in place of structure-oriented semblance. As a result, we confirmed that applying the corner response method can reduce the computation time by a factor of about 6,000 or more.
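The abstract does not spell out the corner response formula; the numpy sketch below assumes the standard Harris-style response R = det(M) - k * trace(M)^2 built from a smoothed gradient structure tensor, which illustrates why it is cheap: it is a closed-form, per-pixel computation rather than a windowed semblance scan.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def corner_response(img: np.ndarray, sigma: float = 1.5,
                    k: float = 0.05) -> np.ndarray:
    """Harris-style corner response as a continuity indicator (sketch)."""
    gy, gx = np.gradient(img.astype(float))
    # Structure tensor components, smoothed over a local Gaussian window.
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    det_m = jxx * jyy - jxy ** 2
    tr_m = jxx + jyy
    # Large response: corner/discontinuity (e.g. a fault), so smoothing
    # across it should be suppressed; near-zero response: flat or
    # one-directional structure, so smoothing along it is safe.
    return det_m - k * tr_m ** 2
```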

The Error and the Graphical Presentation form of the Binocular Vision Findings (양안시기능 검사 값의 오차와 그래프 양식)

  • Yoon, Seok-Hyun
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.12 no.3
    • /
    • pp.39-48
    • /
    • 2007
  • The stimulus of accommodation $A$, the stimulus of convergence $C$, and the prism diopter $\Delta$ are reviewed and redefined more precisely, and how $A$ and $C$ are handled in practice is reviewed and summarized. As a result, the common practical processing of binocular vision findings is most appropriate in the case $l_c = 26.67\,\mathrm{mm}$, where the near distance, measured from the test lens to the near target, is 40 cm and the average P.D. equals 64 mm. Here $l_c$ is the distance between the test lens and the center of rotation; these values were used for the calculations in this paper. The error of the stimulus-of-accommodation values evaluated by the practically used formula (5) is calculated, where the distance between the lens and the principal point of the eye is 15.07 mm ($= l_H$). The incremental stimulus-of-convergence values $P'$ caused by an added prism $P_m$ are evaluated by a recursive computation method. $P'$ varies with $P_m$, the distance $p_c$ between the prism and the center of rotation, the initial convergence value (the inverse target distance) $C_o$, and the refractive index $n$ of the prism material; the recursive computation method and the other formulas are described in detail, and $n = 1.7$ is used in this paper. Two factors increase $P'$. The major one is the property that convergence values expressed in prism diopters ($\Delta$) do not add in the ordinary linear way. The other is that the actual power of a prism varies with the angle of the incident light. Moreover, $P'$ decreases remarkably with increasing $p_c$ and $C_o$. The ratio $P'/P_m$ is calculated and graphed as it varies with $p_c$ and $C_o$, for $P_m = 20\Delta$, P.D. = 64 mm, and $n = 1.7$; the dependence of $P'/P_m$ on the index $n$ is negligible (see fig. 6). The values of $p_c$ at which $P'$ equals $P_m$ are evaluated for various $P_m$ (see table 1). The actual values of the stimulus of convergence and accommodation, which are handled only approximately in practice, are calculated. Two graphical forms are suggested. One is similar to the commonly used form, but the practical stimulus-of-convergence and stimulus-of-accommodation values are placed at their exact positions when the graph is drawn (see fig. 9). The other is a form in which the incremental stimulus-of-convergence values caused by the added prisms are represented at their actual positions (see fig. 11).
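The non-additivity that the paper identifies as the major factor follows directly from the definition of the prism diopter; the identity below is a standard illustration, not the paper's recursion itself:

```latex
P = 100\tan\theta
\quad\Longrightarrow\quad
P_{12} = 100\tan(\theta_1+\theta_2)
       = \frac{P_1 + P_2}{1 - P_1 P_2 / 100^{2}}
       \neq P_1 + P_2 .
```

For example, $P_1 = P_2 = 20\Delta$ combine to $P_{12} = 40/(1-0.04) \approx 41.7\Delta$ rather than $40\Delta$, so stacked deviations add super-linearly.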

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton-scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered-subsets principle, can be applied to the problem of image reconstruction for a Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered-subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were run for 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to standard EM with 64 iterations, was approximately 14 times faster in computation time than standard EM. In OSEM, all three subset-selection schemes yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to image reconstruction for the Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while preserving the quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position is the most suitable.
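The block-iterative update described above is the standard multiplicative OSEM step; the numpy sketch below assumes a dense system matrix A (rows: measured projection bins, columns: image voxels) and interleaved subsets as a stand-in for the angle- and position-based groupings compared in the paper.

```python
import numpy as np

def osem(A: np.ndarray, y: np.ndarray, n_subsets: int = 16,
         n_iters: int = 4, eps: float = 1e-12) -> np.ndarray:
    """Ordered-subset EM reconstruction (sketch)."""
    n_bins, n_voxels = A.shape
    x = np.ones(n_voxels)  # uniform initial image
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:                  # one EM update per subset
            As = A[idx]
            ratio = y[idx] / (As @ x + eps)  # measured / predicted counts
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)
    return x
```

With 16 subsets, each pass over the data performs 16 image updates instead of one, which is the source of the roughly subset-fold speedup reported in the abstract.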

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods or the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often produce a default classifier with a skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted. Boosting thus attempts to produce new classifiers that are better able to predict the examples on which the current ensemble performs poorly, which reinforces the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can account for geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each ten-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
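As a concrete reading of 'geometric mean-based accuracy', the sketch below computes the geometric mean of per-class recalls (the standard G-mean); the MGM-Boost weight update itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def geometric_mean_accuracy(y_true: np.ndarray,
                            y_pred: np.ndarray) -> float:
    """Geometric mean of per-class recalls (G-mean) for multi-class data."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(classes)))
```

Unlike arithmetic accuracy, this score collapses to zero as soon as any single class is never predicted correctly, which is why it penalizes classifiers skewed by imbalanced rating classes.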