• Title/Summary/Keyword: Numerical approach


Optimization of the Truss Structures Using Member Stress Approximate method (응력근사해법(應力近似解法)을 이용한 평면(平面)트러스구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;You, Hee Jung
    • KSCE Journal of Civil and Environmental Engineering Research / v.13 no.2 / pp.73-84 / 1993
  • In this research, configuration design optimization of plane truss structures is performed using a decomposition technique. In the first level, the nonlinear programming problem is transformed into a linear programming problem, and the number of structural analyses required for sensitivity analysis is reduced by expanding the stress constraints into member-stress approximations based on the design-space approach, which has proved efficient for sensitivity analysis. The structural weight is adopted as the cost function to be minimized. The design constraints considered are allowable stress, buckling stress, displacement constraints under multiple loading conditions, and upper and lower bounds on the design variables. In the second level, the nodal coordinates of the truss are taken as the coordinating variables and the objective function is again the weight; by treating the nodal coordinates as design variables, the resulting unconstrained optimization problems are easy to solve. This decomposition method, which optimizes the member cross-sectional areas in the first level and the configuration variables in the second level, was applied to plane truss structures. Numerical tests on several truss structures with various shapes and design criteria show that the convergence rate is fast regardless of the constraint types and the truss configuration, and that the optimal configurations obtained in this study are nearly identical to those reported elsewhere. Although the reduction varies from case to case, the total weight was decreased by 5.4%-15.4% when the optimal configuration was achieved.
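A minimal sketch of the two-level idea described in this abstract, reduced to the simplest possible plane truss (a symmetric two-bar truss) so that it stays self-contained: the first level sizes the member areas against the allowable stress at fixed geometry (a fully-stressed stand-in for the paper's member-stress approximation), and the second level searches the nodal coordinate (apex height) that minimizes the weight. All numerical values are illustrative, not taken from the paper.

```python
# Two-level shape optimization sketch for the simplest possible plane truss:
# a symmetric two-bar truss with supports at (+/-L, 0), apex at (0, h), load P.
# First level: size member areas for the stress constraint at fixed geometry
# (a fully-stressed stand-in for the member-stress-approximation sizing).
# Second level: optimize the apex height h to minimize total weight.
import numpy as np
from scipy.optimize import minimize_scalar

L, P = 1.0, 100.0                   # half-span [m], vertical load [kN] (hypothetical values)
sigma_allow, rho = 150e3, 7.85e3    # allowable stress [kN/m^2], density [kg/m^3]

def first_level_sizing(h):
    """Size the two bar areas so that |stress| = sigma_allow at fixed apex height h."""
    s = np.hypot(L, h)              # bar length
    force = P * s / (2.0 * h)       # bar force magnitude from nodal equilibrium
    area = force / sigma_allow      # smallest area satisfying the stress constraint
    return 2.0 * rho * s * area     # total truss weight (both bars)

def second_level_weight(h):
    """Outer objective: weight of the stress-feasible design at geometry h."""
    return first_level_sizing(h)

res = minimize_scalar(second_level_weight, bounds=(0.1 * L, 5.0 * L), method="bounded")
print(f"optimal apex height h = {res.x:.3f} m (analytical optimum is h = L)")
print(f"minimum weight        = {res.fun:.3f} kg")
```

The bounded search recovers the classical 45-degree optimum, illustrating how the sizing level and the configuration level interact.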


Analysis of the Effect of Corner Points and Image Resolution in a Mechanical Test Combining Digital Image Processing and Mesh-free Method (디지털 이미지 처리와 강형식 기반의 무요소법을 융합한 시험법의 모서리 점과 이미지 해상도의 영향 분석)

  • Junwon Park;Yeon-Suk Jeong;Young-Cheol Yoon
    • Journal of the Computational Structural Engineering Institute of Korea / v.37 no.1 / pp.67-76 / 2024
  • In this paper, we present a DIP-MLS testing method that combines digital image processing (DIP) with a strong form-based MLS differencing approach to measure mechanical variables, and we analyze the impact of target location and image resolution. The method measures the displacement of targets attached to the specimen through digital image processing and assigns it as the nodal displacement of the MLS differencing method, which uses only nodes to compute mechanical variables such as the stress and strain of the object under study. We propose an effective procedure for measuring the displacement of each target's center of gravity by digital image processing. Computing mechanical variables with the MLS differencing method from the image-based target displacements allows mechanical variables to be evaluated easily at arbitrary positions, without constraints from meshes or grids, because the accurate displacement history of the test specimen is acquired from low-stiffness tracking points. The developed testing method was validated by comparing sensor measurements with DIP-MLS results in a three-point bending test of a rubber beam. Numerical results simulated by the MLS differencing method alone were also compared, confirming that the developed method accurately reproduces the actual test and agrees well with the numerical analysis before large deformation occurs. Furthermore, we analyzed the effect of boundary points by applying 46 tracking points, including the corner points, to the DIP-MLS testing method, compared this with using only the interior points of the target, and determined the optimal image resolution for the method. These results demonstrate that the developed method efficiently addresses the limitations of direct experiments and existing mesh-based simulations, and suggest that the experiment-simulation process can be digitalized to a considerable extent.
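The centroid-tracking step can be illustrated with a short, self-contained sketch: a synthetic bright target is located in two frames by thresholding and intensity-weighted centroiding, and the centroid shift is taken as the target displacement to be handed to the meshfree nodes. The synthetic frames, the threshold, and the calibration factor are assumptions for illustration only.

```python
# Minimal sketch of the image-based displacement measurement step: locate a bright
# circular target in two frames by thresholding and intensity-weighted centroiding,
# then take the centroid shift as the target displacement. Synthetic frames are used
# here; the paper tracks physical targets attached to a rubber beam.
import numpy as np

def make_frame(cx, cy, size=200, radius=6.0):
    """Synthetic grayscale frame with one Gaussian 'target' blob centered at (cx, cy)."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * radius ** 2))

def target_centroid(img, threshold=0.2):
    """Intensity-weighted centroid of the pixels above the threshold."""
    mask = img > threshold
    y, x = np.nonzero(mask)
    w = img[mask]
    return np.array([np.sum(x * w) / np.sum(w), np.sum(y * w) / np.sum(w)])

frame_before = make_frame(100.0, 100.0)
frame_after  = make_frame(103.4, 98.2)         # target moved by (+3.4, -1.8) pixels

displacement_px = target_centroid(frame_after) - target_centroid(frame_before)
pixel_size_mm = 0.05                           # camera calibration factor (assumed)
print("measured displacement [mm]:", displacement_px * pixel_size_mm)
```

The intensity weighting gives sub-pixel resolution, which is why image resolution and target placement matter for the accuracy study described in the abstract.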

Development of A Network loading model for Dynamic traffic Assignment (동적 통행배정모형을 위한 교통류 부하모형의 개발)

  • 임강원
    • Journal of Korean Society of Transportation / v.20 no.3 / pp.149-158 / 2002
  • To describe real-time traffic patterns in an urban road network precisely, dynamic network loading (DNL) models that can simulate traffic behavior are required. A number of different approaches are available, including macroscopic and microscopic dynamic network models as well as analytical models. The analytical models are the equivalent minimization problem and the variational inequality problem, which include an explicit mathematical travel cost function to describe traffic behavior on the network, while microscopic simulation models move vehicles according to behavioral car-following and cell-transmission rules. However, DNL models embedding such travel time functions have limitations: analytical models cannot adequately describe traffic characteristics such as the relations between flow and speed or between speed and density, and microscopic simulation models, although the most detailed and realistic, are difficult to calibrate and may not be practical for large-scale networks. To cope with these problems, this paper develops a new DNL model suited to dynamic traffic assignment (DTA). The model is combined with a vertical queue model that represents vehicles as vertical queues at the end of links. To assess the model, a contrived example network is used. The numerical results show that the proposed DNL model can describe traffic characteristics with a reasonable amount of computing time; it also reproduces a sound relationship between travel time and traffic flow and captures the backward-bending behavior near capacity.
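A point (vertical) queue link model of the kind referred to above can be sketched in a few lines: inflow traverses the link in the free-flow time, accumulates in a vertical queue at the downstream end, and discharges at the link capacity, so that link travel time equals free-flow time plus queueing delay. The demand profile and parameters below are illustrative, not the paper's test network.

```python
# Point (vertical) queue link-loading sketch: inflow traverses the link in the
# free-flow time, then joins a vertical queue at the downstream end that discharges
# at the link capacity. Travel time = free-flow time + queueing delay.
import numpy as np

dt = 1.0                      # time step [s]
T = 600                       # number of steps (10 minutes)
free_flow_time = 60.0         # link free-flow travel time [s]
capacity = 0.5                # exit capacity [veh/s]

inflow = np.where(np.arange(T) * dt < 300, 0.8, 0.2)   # demand pulse [veh/s]
queue = np.zeros(T + 1)       # vehicles stored in the vertical queue
outflow = np.zeros(T)
shift = int(free_flow_time / dt)

for k in range(T):
    arrivals = inflow[k - shift] if k >= shift else 0.0   # flow reaching the queue
    outflow[k] = min(capacity, queue[k] / dt + arrivals)  # discharge limited by capacity
    queue[k + 1] = queue[k] + (arrivals - outflow[k]) * dt

delay = queue[:-1] / capacity                             # queueing delay [s]
travel_time = free_flow_time + delay
print(f"max queue = {queue.max():.1f} veh, max travel time = {travel_time.max():.1f} s")
```

Because delay grows with the stored queue rather than with an analytical travel-time function, this type of loading reproduces the flow-travel time behavior the abstract describes while remaining cheap to compute.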

Optimum Design of Soil Nailing Excavation Wall System Using Genetic Algorithm and Neural Network Theory (유전자 알고리즘 및 인공신경망 이론을 이용한 쏘일네일링 굴착벽체 시스템의 최적설계)

  • 김홍택;황정순;박성원;유한규
    • Journal of the Korean Geotechnical Society / v.15 no.4 / pp.113-132 / 1999
  • Recently in Korea, the application of soil nailing has gradually been extended to excavation and slope sites with various ground conditions and field characteristics. Design of a soil nailing system is generally carried out in two steps: the first step examines the minimum safety factor against sliding of the reinforced nailed-soil mass based on the limit equilibrium approach, and the second step checks the maximum displacement expected at the facing using numerical analysis. However, the design parameters of a soil nailing system are so varied that a reliable design method accounting for their interrelationships is still needed. In addition, considering the anisotropy of in-situ ground, disturbance during soil sampling, and measurement errors, a systematic analysis of field measurement data and a rational optimum design technique are required to improve economic efficiency. To this end, the present study proposes a procedure for the optimum design of a soil nailing excavation wall system. Focusing on minimizing construction cost, the optimum design procedure is formulated using a genetic algorithm, and neural network theory is adopted to predict the maximum horizontal displacement at the shotcrete facing. Using the proposed procedure, the effects of the relevant design parameters are analyzed. Finally, an optimized design section is compared with the existing design section at an excavation site under construction to verify the validity of the proposed procedure.
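The optimization loop can be pictured with a hedged sketch: a simple genetic algorithm minimizes a construction-cost proxy over two nail design variables, with a penalty whenever a surrogate-predicted facing displacement exceeds an assumed limit. The cost proxy, the closed-form "surrogate", and all bounds are stand-ins; the paper instead trains a neural network on measured wall displacements and works with a richer set of design variables.

```python
# Genetic-algorithm sketch for nail design: minimize a construction-cost proxy
# subject to a displacement limit predicted by a surrogate model. Here the
# "surrogate" is an explicit stand-in formula; the paper trains a neural network
# on measured wall displacements instead.
import numpy as np

rng = np.random.default_rng(0)
LIMIT_MM = 30.0                       # allowable horizontal facing displacement (assumed)

def cost(x):                          # x = [nail length m, vertical spacing m]
    length, spacing = x
    return length / spacing           # nail steel per unit wall area (cost proxy)

def surrogate_displacement(x):        # stand-in for the trained neural network
    length, spacing = x
    return 60.0 * spacing / length    # longer/denser nails -> smaller displacement

def fitness(x):
    penalty = 1e3 * max(0.0, surrogate_displacement(x) - LIMIT_MM)
    return cost(x) + penalty

lo, hi = np.array([4.0, 1.0]), np.array([12.0, 3.0])      # design-variable bounds
pop = rng.uniform(lo, hi, size=(40, 2))

for _ in range(100):                  # evolve: tournament selection + blend + mutation
    f = np.array([fitness(x) for x in pop])
    idx = np.array([min(rng.integers(0, 40, 2), key=lambda i: f[i]) for _ in range(40)])
    parents = pop[idx]
    children = 0.5 * (parents + parents[rng.permutation(40)])
    children += rng.normal(0.0, 0.1, children.shape)
    pop = np.clip(children, lo, hi)

best = min(pop, key=fitness)
print("best [length, spacing]:", best, " predicted displacement:", surrogate_displacement(best))
```

The penalty drives the population toward the cheapest layout that still satisfies the displacement limit, which is the role the neural network surrogate plays inside the paper's genetic algorithm.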


Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science / v.3 no.1 / pp.61-78 / 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of handwritten Hangul (Korean) syllabic characters. In the original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numeric characters of size $19{\times}19$. This version accepts $61{\times}61$ images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and three pairs of Us and Uc layers. The last Uc layer, the recognition layer, consists of 24 planes of $5{\times}5$ cells, which indicate the identity of the grapheme receiving attention at a given moment and its relative position in the input layer. The network was trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variation, translation, or rotation, was then used to recognize Korean syllabic characters, employing its selective attention mechanism for the image segmentation task within a syllabic character. In initial tests, the model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. These results show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both structure and graphemes, but processing time appears to be the bottleneck, and special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is also required before the model can recognize Korean syllabic characters containing complex vowels and complex consonants, for which correct recognition of the neighboring area between two simple graphemes becomes more critical.
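The alternating feature-extraction and pooling structure of the Neocognitron can be sketched as a toy forward pass: S-layer planes respond to local templates and C-layer planes max-pool those responses to tolerate small shifts. This sketch deliberately omits training, the multiple cell planes per layer, and the backward selective-attention path that the paper relies on for segmentation.

```python
# Toy forward pass through a Neocognitron-style hierarchy: S-layers extract local
# features by template matching (plain 2-D cross-correlation with a threshold)
# and C-layers pool them so responses tolerate small positional shifts.
import numpy as np

def s_layer(img, kernels, threshold=0.5):
    """One plane per kernel: thresholded valid cross-correlation."""
    kh, kw = kernels[0].shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    planes = []
    for k in kernels:
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
        planes.append(np.maximum(out - threshold, 0.0))   # simple threshold nonlinearity
    return planes

def c_layer(planes, pool=2):
    """Max-pool each plane to build shift tolerance, as C-cells do."""
    pooled = []
    for p in planes:
        h, w = p.shape[0] // pool, p.shape[1] // pool
        pooled.append(p[:h*pool, :w*pool].reshape(h, pool, w, pool).max(axis=(1, 3)))
    return pooled

# A 7x7 binary input containing a vertical stroke (a crude grapheme fragment).
img = np.zeros((7, 7)); img[1:6, 3] = 1.0
kernels = [np.array([[0, 1, 0]] * 3, float),                      # vertical-stroke detector
           np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float)]    # horizontal-stroke detector
responses = c_layer(s_layer(img, kernels))
print([r.max() for r in responses])   # the vertical-stroke plane responds most strongly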

An Investigation of Reliability and Safety Factors in RC Flexural Members Designed by Current WSD Standard Code (현행(現行) 허용응력설계법(許容應力設計法)으로 설계(設計)되는 RC 휨부재(部材)의 신뢰성(信賴性)과 안전율(安全率) 고찰(考察))

  • Shin, Hyun Mook;Cho, Hyo Nam;Chung, Hwan Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.1 no.1 / pp.33-42 / 1981
  • The current standard code for reinforced concrete design consists of two conventional parts, WSD and USD, which are based on the ACI 318-63 and 318-71 code provisions. The safety factors of the WSD and USD design criteria, taken primarily from the ACI 318-63 code, are considered inappropriate for our country's design and construction practices; moreover, even the ACI safety factors were determined not from probabilistic study but merely from experience and practice. This study investigates the safety level of RC flexural members designed under the current WSD safety provisions using second-moment reliability theory, and proposes a rational yet efficient way of determining the nominal safety factors and the associated allowable flexural stresses of reinforcing steel and concrete so as to provide a consistent level of target reliability. Cornell's mean first-order second-moment method, formulated through a lognormal transformation of the resistance and load variables, is adopted as the reliability analysis method. The compressive allowable stress formulae are derived by a unique approach in which the balanced steel ratio of the resulting design is matched to the corresponding under-reinforced section designed by the strength design method with an optimum reinforcement ratio. The target reliability index for the safety provisions is taken as ${\beta}=4$, which is well suited to our level of construction and design practice. A series of numerical applications investigating the safety and reliability of RC flexural members designed by the current WSD code shows that WSD-based design results in uneconomical designs because of unusual and inconsistent reliability. A rational set of reliability-based safety factors and allowable stresses of reinforcing steel and concrete for flexural members is therefore proposed for the target reliability ${\beta}=4$.
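The lognormal-format MFOSM index mentioned above reduces to a compact relation, ${\beta} = \ln(\bar{R}/\bar{Q})/\sqrt{V_R^2 + V_Q^2}$, so a target ${\beta}$ directly fixes the required central safety factor. The short sketch below evaluates both directions with illustrative coefficients of variation, which are not the paper's calibrated statistics.

```python
# Worked sketch of the lognormal-format MFOSM reliability index:
# beta = ln(mean_R / mean_Q) / sqrt(V_R^2 + V_Q^2), so hitting a target beta fixes
# the required central safety factor theta = mean_R / mean_Q. The coefficients of
# variation below are illustrative only.
import math

def beta_lognormal(mean_R, mean_Q, V_R, V_Q):
    """First-order second-moment reliability index, lognormal format."""
    return math.log(mean_R / mean_Q) / math.hypot(V_R, V_Q)

def required_safety_factor(beta_target, V_R, V_Q):
    """Central safety factor that achieves the target reliability index."""
    return math.exp(beta_target * math.hypot(V_R, V_Q))

V_R, V_Q = 0.15, 0.20                    # illustrative COVs of resistance and load effect
print(beta_lognormal(mean_R=3.0, mean_Q=1.0, V_R=V_R, V_Q=V_Q))   # beta for theta = 3
print(required_safety_factor(beta_target=4.0, V_R=V_R, V_Q=V_Q))  # theta needed for beta = 4
```

Calibrating allowable stresses then amounts to choosing nominal values so that the resulting central safety factor meets the target ${\beta}$ across the design range.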


Determination of cross section of composite breakwaters with multiple failure modes and system reliability analysis (다중 파괴모드에 의한 혼성제 케이슨의 단면 산정 및 제체에 대한 시스템 신뢰성 해석)

  • Lee, Cheol-Eung;Kim, Sang-Ug;Park, Dong-Heon
    • Journal of Korea Water Resources Association / v.51 no.9 / pp.827-837 / 2018
  • The stability of a composite caisson breakwater against caisson sliding and overturning and against bearing failure of the mound under eccentric and inclined loads has been analyzed using a multiple-failure-mode approach. In the deterministic approach, mathematical functions are first derived from the ultimate limit state equations; using these functions, the minimum cross section of the caisson can be evaluated straightforwardly. Various deterministic analyses show that conflicts between failure modes can occur, for example the bearing-capacity stability of the mound decreasing as the sliding stability increases. Therefore, the multiple failure modes of a composite caisson breakwater should be considered simultaneously even when the design cross section of the caisson is evaluated deterministically. Meanwhile, reliability analyses over the multiple failure modes were carried out for the cross section determined by the sliding failure mode. The system failure probabilities of the composite breakwater behave very differently depending on the incident waves; they also tend to increase as the crest freeboard of the caisson increases, and similar behavior occurs as the water depth above the mound deepens. Finally, the first-order results coincide closely with the second-order results in all of the numerical tests performed in this paper, but the second-order method is more accurate than the first-order method, mainly because correlations between failure modes are properly incorporated in the second-order method. Nevertheless, the first-order method can readily be used when one failure probability among the multiple failure modes is much larger than the others.
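The comparison between the first-order and second-order system estimates can be illustrated with series-system bounds over three correlated failure modes: uni-modal (first-order) bounds use only the individual mode probabilities, while Ditlevsen-type (second-order) bounds add pairwise joint failure probabilities and thus account for correlation between modes. The reliability indices and correlations below are illustrative, not the caisson results of the paper.

```python
# Sketch of series-system reliability bounds over three failure modes (e.g. sliding,
# overturning, bearing). "First-order" uni-modal bounds use only the individual mode
# probabilities; "second-order" (Ditlevsen) bounds add pairwise joint failure
# probabilities, which is where correlation between modes enters.
import numpy as np
from scipy.stats import norm, multivariate_normal

betas = np.array([3.2, 3.8, 2.9])            # mode reliability indices (illustrative)
rho = np.array([[1.0, 0.6, 0.4],
                [0.6, 1.0, 0.5],
                [0.4, 0.5, 1.0]])             # correlations between mode safety margins

P = norm.cdf(-betas)                          # individual mode failure probabilities

def joint(i, j):
    """P(mode i fails and mode j fails) for correlated normal safety margins."""
    cov = [[1.0, rho[i, j]], [rho[i, j], 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-betas[i], -betas[j]])

# First-order (uni-modal) bounds for a series system.
lo1, hi1 = P.max(), min(1.0, P.sum())

# Second-order (Ditlevsen) bounds, with modes sorted by decreasing probability.
order = np.argsort(-P)
P, betas, rho = P[order], betas[order], rho[np.ix_(order, order)]
lo2 = P[0] + sum(max(0.0, P[i] - sum(joint(i, j) for j in range(i))) for i in range(1, 3))
hi2 = P.sum() - sum(max(joint(i, j) for j in range(i)) for i in range(1, 3))

print(f"uni-modal bounds : [{lo1:.2e}, {hi1:.2e}]")
print(f"Ditlevsen bounds : [{lo2:.2e}, {hi2:.2e}]")
```

When one mode probability dominates, both sets of bounds collapse onto it, which is why the abstract notes that the first-order estimate suffices in that case.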

Retrieval of Hourly Aerosol Optical Depth Using Top-of-Atmosphere Reflectance from GOCI-II and Machine Learning over South Korea (GOCI-II 대기상한 반사도와 기계학습을 이용한 남한 지역 시간별 에어로졸 광학 두께 산출)

  • Seyoung Yang;Hyunyoung Choi;Jungho Im
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.933-948 / 2023
  • Atmospheric aerosols not only have adverse effects on human health but also exert direct and indirect impacts on the climate system, so it is imperative to understand their characteristics and spatiotemporal distribution. Numerous studies have monitored aerosols, predominantly through the retrieval of aerosol optical depth (AOD) from satellite observations; however, this approach primarily relies on look-up-table-based inversion algorithms, which are computationally intensive and carry associated uncertainties. In this study, a novel high-resolution AOD direct retrieval algorithm based on machine learning was developed using top-of-atmosphere reflectance data from the Geostationary Ocean Color Imager-II (GOCI-II), their differences from the past 30-day minimum reflectance, and meteorological variables from numerical models. The Light Gradient Boosting Machine (LGBM) technique was used, and the resulting estimates were validated through random, temporal, and spatial N-fold cross-validation (CV) against ground-based Aerosol Robotic Network (AERONET) AOD observations. The three CV results consistently demonstrated robust performance, yielding R2 = 0.70-0.80 and RMSE = 0.08-0.09, with 75.2-85.1% of retrievals falling within the expected error (EE). The Shapley Additive exPlanations (SHAP) analysis confirmed the substantial influence of the reflectance-related variables on AOD estimation. Examination of the spatiotemporal distribution of AOD in Seoul and Ulsan showed that the developed LGBM model closely follows AERONET AOD over time, confirming its suitability for AOD retrieval at high spatiotemporal resolution (i.e., hourly, 250 m). Furthermore, a comparison of data coverage showed that the LGBM model increased retrieval frequency by approximately 8.8% relative to the GOCI-II L2 AOD products, alleviating the excessive masking over bright surfaces that is often encountered in physics-based AOD retrieval.
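The estimation setup (a gradient-boosted tree regressor plus K-fold validation against AERONET) can be sketched with synthetic data standing in for the GOCI-II/AERONET matchups; the paper's feature construction, spatial and temporal CV variants, and hyperparameter tuning are omitted here.

```python
# Minimal sketch of the estimation setup: train a LightGBM regressor on
# TOA-reflectance-derived features plus meteorological predictors and evaluate it
# with random K-fold cross-validation against AERONET AOD. Synthetic data stand in
# for the real matchup dataset.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 8))            # e.g., TOA reflectances, 30-day-min differences, met vars
aod = 0.3 + 0.1 * X[:, 0] - 0.05 * X[:, 3] + rng.normal(0, 0.03, n)   # synthetic target

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05, num_leaves=31)
    model.fit(X[train], aod[train])
    pred = model.predict(X[test])
    scores.append((r2_score(aod[test], pred),
                   mean_squared_error(aod[test], pred) ** 0.5))

for r2, rmse in scores:
    print(f"R2 = {r2:.2f}, RMSE = {rmse:.3f}")
```

Temporal and spatial CV variants simply replace the random fold assignment with splits by date or by station, which is how the paper checks generalization beyond the training conditions.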

Estimation of the Surface Currents using Mean Dynamic Topography and Satellite Altimeter Data in the East Sea (평균역학고도장과 인공위성고도계 자료를 이용한 동해 표층해류 추산)

  • Lee, Sang-Hyun;Byun, Do-Seong;Choi, Byoung-Ju;Lee, Eun-Il
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.14 no.4 / pp.195-204 / 2009
  • In order to estimate sea surface current fields in the East Sea, we examined the characteristics of mean dynamic topography (MDT) fields (or mean surface current, MSC, fields) generated by three different methods. This preliminary investigation evaluates the accuracy of surface currents estimated from satellite-derived sea level anomaly (SLA) data combined with the three MDT fields in the East Sea. AVISO (Archiving, Validation and Interpretation of Satellite Oceanographic data) provides an MDT field derived from satellite observations and numerical models with $0.25^{\circ}$ horizontal resolution; the steric height field relative to 500 dbar, computed from temperature and salinity profiles in the East Sea, supplies a second MDT field; and 14 years of surface drifter (ARGOS) trajectory data in the East Sea provide a third MSC field. The absolute dynamic topography (ADT) field is calculated by adding the SLA to each MDT, and applying the geostrophic equation to the three ADT fields yields three surface geostrophic current fields. Comparisons were made between the currents estimated by the three methods and in-situ current measurements from a ship-mounted ADCP (Acoustic Doppler Current Profiler) in the southwestern East Sea in 2005. For offshore areas more than 50 km from land, the correlation coefficients (R) between the estimated and measured currents range from 0.58 to 0.73, with root mean square deviations (RMSD) of 17.1 to $21.7\;cm\;s^{-1}$. For coastal waters within 50 km of land, however, R ranges from 0.06 to 0.46 and RMSD from 15.5 to $28.0\;cm\;s^{-1}$. These results reveal that a new approach to producing MDT and SLA is required to improve the accuracy of surface current estimates for the shallow coastal zones of the East Sea.
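The final conversion from an absolute dynamic topography field to surface currents is the geostrophic relation $u = -(g/f)\,\partial\eta/\partial y$, $v = (g/f)\,\partial\eta/\partial x$, sketched below on a synthetic ADT bump. The grid spacing and latitude are assumptions, and the paper's actual MDT fields (AVISO, steric height, drifters) are not reproduced here.

```python
# Sketch of the final step: convert an absolute dynamic topography (ADT) field
# (MDT + sea level anomaly) into surface geostrophic velocities,
#   u = -(g/f) * d(eta)/dy,  v = (g/f) * d(eta)/dx.
# The Gaussian ADT bump below is synthetic.
import numpy as np

g = 9.81
f = 2 * 7.2921e-5 * np.sin(np.radians(38.0))       # Coriolis parameter at 38N [1/s]
dx = dy = 25e3                                     # grid spacing [m] (~0.25 degree)

jj, ii = np.mgrid[0:40, 0:40]
x, y = ii * dx, jj * dy
eta = 0.3 * np.exp(-((x - 5e5) ** 2 + (y - 5e5) ** 2) / (2 * (1.5e5) ** 2))  # ADT [m]

deta_dy, deta_dx = np.gradient(eta, dy, dx)        # gradients along rows (y) and columns (x)
u = -(g / f) * deta_dy                             # zonal geostrophic velocity [m/s]
v =  (g / f) * deta_dx                             # meridional geostrophic velocity [m/s]
print(f"max speed around the synthetic eddy: {np.hypot(u, v).max():.2f} m/s")
```

Because the velocities come from horizontal gradients of the ADT, errors in the MDT near the coast translate directly into the degraded coastal skill reported in the abstract.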

The Impact of Market Environments on Optimal Channel Strategy Involving an Internet Channel: A Game Theoretic Approach (시장 환경이 인터넷 경로를 포함한 다중 경로 관리에 미치는 영향에 관한 연구: 게임 이론적 접근방법)

  • Yoo, Weon-Sang
    • Journal of Distribution Research / v.16 no.2 / pp.119-138 / 2011
  • Internet commerce has grown at a rapid pace over the last decade, and many firms try to reach wider consumer markets by adding an Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results, largely because most of them conducted their analyses under one specific market condition and reported the outcome as the impact of Internet channel introduction; their results are therefore strongly influenced by the particular market settings assumed. In the real world, however, firms face a variety of market conditions. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game-theoretic model in which market conditions are captured by consumer density and the disutility of using the Internet.

    The channel structures analyzed in this study are as follows. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer may introduce its own Internet channel (MI); the independent physical store may introduce its own Internet channel and coordinate it with the existing physical store (RI); or an independent Internet retailer such as Amazon may enter the market (II), in which case two types of independent retailers compete with each other. In the model, consumers are uniformly distributed over a two-dimensional space, and consumer heterogeneity is captured by a consumer's geographical location ($c_i$) and by the disutility of using the Internet channel (${\delta}_{N_i}$).
    Various market conditions are captured by these two consumer heterogeneities. One case is a market with symmetric consumer distributions; the model also captures asymmetric distributions of consumer disutility. In one type of market, the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store, for example when the product is suitable for Internet transactions (e.g., books) or when the level of e-commerce readiness is high, as in Denmark or Finland. In the opposite type of market, the average consumer disutility of using an Internet store is relatively greater than that of using a physical store; countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, are examples of this condition. Across the scenarios of consumer distributions analyzed in this study, the range of the disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of the consumer location distribution (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
    The analysis results can be summarized as follows. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, the average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase; the quantity sold through the Internet and the profit of the Internet store, on the other hand, decrease. We find that a channel with an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, the Internet store becomes a mass retailer serving a larger portion of the market. This implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The degree of price discrimination also varies with the distribution of consumer disutility: the manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower, and the manufacturer has a stronger incentive to maintain a high price level when the average travel cost is relatively low. Additionally, the retail competition effect of Internet channel introduction strengthens as the average travel cost decreases, which indicates that the manufacturer's channel power relative to that of the independent physical retailer becomes stronger as the average travel cost falls. This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are geographically dispersed, than in a market like Hong Kong, with its condensed geographic distribution of consumers.
    When managers consider the overall impact of the Internet channel, however, they should consider not only channel power but also sales volume. When both are considered, the introduction of the Internet channel turns out to be more harmful to a physical retailer in Russia than to one in Hong Kong, because the decrease in sales volume for a physical store due to Internet channel competition is much greater in Russia than in Hong Kong. The results show that the manufacturer is always better off with any type of Internet store introduction. The independent physical store benefits from opening its own Internet store when the average travel cost is high relative to the disutility of using the Internet; under the opposite market condition, however, the independent physical retailer can be worse off when it opens its own Internet outlet and coordinates both outlets (RI), because the low average travel cost significantly reduces the channel power of the independent physical retailer, further aggravating the already weak channel power caused by myopic inter-channel price coordination. The results imply that channel members and policy makers should explicitly consider the factors determining the relative distributions of both kinds of consumer disutility when they make a channel decision involving an Internet channel; these factors include the suitability of a product for Internet shopping, the level of e-commerce readiness of a market, and the degree of geographic dispersion of consumers in a market. Despite its academic contributions and managerial implications, this study is limited in the following ways. First, a series of numerical analyses was conducted to derive equilibrium solutions because of the complex forms of the demand functions; in the process, we set V=100, ${\lambda}$=1, and ${\beta}$=0.01, and future research may change this parameter set to check the generalizability of the results. Second, five different scenarios for market conditions were analyzed, and future research could try different sets of parameter ranges. Finally, the model allows only one monopoly manufacturer in the market; accommodating multiple competing manufacturers (brands) would generate more realistic results.
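A hedged numerical sketch of how such demand shares can be computed: consumers uniform over location and Internet disutility choose whichever option gives the highest surplus. The surplus forms, prices, and disutility range below are assumptions (only V=100 and ${\lambda}$=1 follow the abstract's parameter set), so this reproduces the qualitative mechanism, not the paper's equilibrium analysis.

```python
# Illustrative demand-share computation (not the paper's exact demand system):
# consumers are uniform over location x and Internet disutility d; each buys from
# the physical store, the Internet store, or not at all, whichever surplus is highest.
import numpy as np

V, lam = 100.0, 1.0                  # valuation and travel-cost rate (from the abstract)
price_store, price_net = 60.0, 55.0  # assumed retail prices
store_location = 0.0

def demand_shares(x_range, d_range, n=400):
    x = np.linspace(-x_range, x_range, n)              # consumer locations
    d = np.linspace(0.0, d_range, n)                   # Internet disutilities
    X, D = np.meshgrid(x, d)
    surplus_store = V - price_store - lam * np.abs(X - store_location)
    surplus_net = V - price_net - D
    buy_store = (surplus_store >= surplus_net) & (surplus_store >= 0)
    buy_net = (surplus_net > surplus_store) & (surplus_net >= 0)
    return buy_store.mean(), buy_net.mean()

# Wider geographic dispersion (higher average travel cost) shifts demand to the Internet.
for x_range in (25, 50, 100, 200):
    s, n = demand_shares(x_range, d_range=80.0)
    print(f"x in [-{x_range}, {x_range}]: store share = {s:.2f}, Internet share = {n:.2f}")
```

Running the sketch shows the Internet store's share growing with geographic dispersion, which is the mass-retailer effect described in the results above.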

