• Title/Summary/Keyword: Section analysis method

Search results: 2,260 (processing time: 0.03 seconds)

A Study on the Construction method of Stamped earthen wall (판축토성(版築土城) 축조기법(築造技法)의 이해(理解) - 풍납토성(風納土城) 축조기술(築造技術)을 중심(中心)으로 -)

  • Shin, Hee-kweon
    • Korean Journal of Heritage: History & Science
    • /
    • v.47 no.1
    • /
    • pp.102-115
    • /
    • 2014
  • The stamped earth method is a typical ancient engineering technique in which a wooden frame is in-filled with layers of stamped earth or sand. This method has been used universally to construct earthen walls, buildings, and other structures. The purpose of this article is to understand the construction method and principles of the stamped earthen wall through an analysis of the various construction techniques of Pungnaptoseong Fortress (the earthen fortification in Pungnap-dong). First, the ground was leveled and the foundations for the earthen wall were laid. The underground foundation was usually constructed by digging into the ground and in-filling the space with layers of mud clay; occasionally, wooden posts were driven in or paving stones laid, which may have served to reinforce the soft ground. The most characteristic feature of Pungnaptoseong Fortress is the method of adding layers of stamped earth at an oblique angle to either side of a central wall. Although traces of fixing posts, boards, and hardened earth - all signatures of the stamped earth technique - have not been identified, evidence of a wooden frame has been found. It has also been observed that this section was constructed with layers of mud clay and organic remains such as leaves and twigs in order to strengthen the adhesiveness of the structure. The outer part of the central wall was constructed with the anti-slope stamped earth technique to protect the central wall, and a final layer of paved stones was added to the upper part of the wall. These stone layers and the stone wall were built to prevent loss of the earthen wall and to discharge and drain water. Meanwhile, the technique of cementing with fire was used to control damp and remove water from the stamped earth. It cannot yet be said that the stamped earth method has been confirmed as the typical construction method of ancient Korean earthen walls. If the evidence of the stamped earth technique at Pungnaptoseong Fortress is compared with that from other archaeological sites, progress can be made in understanding the construction method and principles of stamped earthen walls.

Influence of the Existing Cavern on the Stability of Adjacent Tunnel Excavation by Small-Scale Model Tests (축소모형시험을 통한 공동이 근접터널 굴착에 미치는 영향평가)

  • Jung, Minchul;Hwang, Jungsoon;Kim, Jongseob;Kim, Seungwook;Baek, Seungcheol
    • Journal of the Korean GEO-environmental Society
    • /
    • v.15 no.12
    • /
    • pp.117-128
    • /
    • 2014
  • Generally, when a tunnel is constructed close to existing structures, it must be built at a distance from those structures of more than the width of the tunnel in order to minimize interference between the existing structures and the new tunnel. The spacing of such closely spaced tunnels should be designed considering the soil conditions, the size of the tunnel, and the reinforcement method. Particularly when the ground is soft, care should be taken in planning the tunnel, because the closer the tunnel is to the existing structures, the greater the deformation becomes. Field measurement, numerical analysis, and scaled model tests can all be used to review the effect of cavities on the stability of a tunnel. Among these, the scaled model test can reproduce the engineering characteristics of the rock under field conditions and the shape of the structures using a scale factor, even though not all conditions can be considered. In this study, the method and the factors to consider in a scaled model test were studied so that the actual behavior of a tunnel constructed close to existing structures can be predicted at the planning stage. The model test results were compared with numerical analysis results to verify the proposed model test procedure. Practical results on the stability of a tunnel near cavities were also derived through scaled model tests that assumed spacing distances of 0.25 D, 0.50 D, and 1.00 D between the cavities and the tunnel, as well as the network state distribution. A spacing distance of 1.0 D was evaluated as the critical distance based on the results of the model tests and the numerical analysis.
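The geometric-similarity idea behind the scaled model test above can be illustrated with a minimal sketch; the prototype tunnel diameter and the 1/20 scale factor below are assumed values for illustration, not figures from the paper.

```python
def model_length(prototype_m, scale):
    """Convert a prototype length to the scaled-model length
    (geometric similarity, e.g. scale = 1/20)."""
    return prototype_m * scale

TUNNEL_DIAMETER_M = 10.0   # assumed prototype tunnel diameter D (m)
SCALE = 1.0 / 20.0         # assumed geometric scale factor

# Cavity-to-tunnel spacings studied in the paper: 0.25 D, 0.50 D, 1.00 D
for ratio in (0.25, 0.50, 1.00):
    spacing_mm = 1000.0 * model_length(ratio * TUNNEL_DIAMETER_M, SCALE)
    print(f"{ratio:.2f} D -> {spacing_mm:.0f} mm in the model")
```

Stress- and stiffness-similarity factors would also be needed for a full model design; this sketch covers only the geometric conversion.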

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Various types of unstructured data such as images, sound, video, and text are now distributed through Web media, and many recent attempts have been made to discover new value by analyzing these unstructured data. Among these types, text is recognized as the most representative medium through which users express and share their opinions on the Web, so demand for new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is widely studied not only in academia but also in industry, because it can extract various issues from text such as news articles and SNS (Social Network Service) posts and analyze the trends of those issues. Conventionally, issue tracking identifies major issues sustained over a long period through topic modeling and analyzes the detailed distribution of the documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process by which detailed issues are created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking.
Note that detailed keywords are preferable to general keywords because the former can provide clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period and generated an issue flow diagram based on the similarity of each issue between two consecutive periods. The issue transition pattern among categories was then analyzed using the category information of each document. We applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. The experiment section presents the following useful application scenarios for the issue flow diagram. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next. Second, the preceding and following issues of a particular issue can easily be discovered from the diagram, which implies that our methodology can be used to discover associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis: a pair of mutually similar categories induces two-way transitions, whereas one-way transitions indicate that issues in a certain category tend to be influenced by issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed, and not only the number of documents but also additional meta-information such as read counts, written time, and comments should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should also be performed in future work.
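Linking issues across consecutive periods by similarity, as described above, can be sketched with cosine similarity over keyword-weight vectors. The toy issues, weights, and threshold below are illustrative assumptions, not the paper's actual similarity measure or data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse keyword-weight dicts."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Issues of two consecutive periods as keyword-weight vectors (toy data)
period1 = {"issue_A": {"north_korea": 0.9, "nuclear_test": 0.7}}
period2 = {"issue_B": {"nuclear_test": 0.8, "sanctions": 0.5},
           "issue_C": {"election": 0.9}}

THRESHOLD = 0.3   # assumed link threshold for drawing a flow-diagram edge
links = [(i, j) for i, vi in period1.items()
                for j, vj in period2.items()
                if cosine(vi, vj) >= THRESHOLD]
print(links)   # issue_A links to issue_B; issue_C has no predecessor
```

An issue with no incoming edge appears newly in that period; one with no outgoing edge has promptly disappeared.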

Analysis on the Influence of Moment Distribution Shape on the Effective Moment of Inertia of Simply Supported Reinforced Concrete Beams (철근콘크리트 단순보의 유효 단면2차모멘트에 대한 모멘트 분포 형상의 영향 분석)

  • Park, Mi-Young;Kim, Sang-Sik;Lee, Seung-Bae;Kim, Chang-Hyuk;Kim, Kang-Su
    • Journal of the Korea Concrete Institute
    • /
    • v.21 no.1
    • /
    • pp.93-103
    • /
    • 2009
  • The concept of the effective moment of inertia has generally been used for estimating the deflection of reinforced concrete flexural members. The KCI design code adopted Branson's equation for simple deflection calculation, in which a representative value of the effective moment of inertia is used for the whole length of a member. However, the code equation for the effective moment of inertia was formulated from tests of beams subjected to uniformly distributed loads, and may not effectively account for members under different loading conditions. Therefore, this study aimed to verify experimentally the influence of the moment shapes produced by different loading patterns. Six beams were fabricated and tested, with the concrete compressive strength and the loading distance from the supports as primary variables, and the test results were compared with the code equation and other existing approaches. A method utilizing variational analysis for deflection estimation was also proposed, which accounts for the influence of the moment shape on the effective moment of inertia. The test results indicated that the effective moment of inertia was somewhat influenced by the moment shape, and that this influence was not captured by the code equation. Compared with the code equation, the proposed method showed smaller variation in the ratios of measured to estimated beam deflections. The proposed method is therefore considered a good approach for taking the influence of the moment shape into account when estimating beam deflection; however, the differences between the test results and the estimated deflections show that more research is still required to improve its accuracy by modifying the shape function of the deflection.
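For reference, Branson's equation mentioned above (as adopted in the KCI and ACI codes) can be written as a small function; the numeric inputs in the usage line are illustrative values only, not the paper's test data.

```python
def effective_moment_of_inertia(m_cr, m_a, i_g, i_cr):
    """Branson's equation as used in the KCI/ACI codes:
    Ie = (Mcr/Ma)^3 * Ig + (1 - (Mcr/Ma)^3) * Icr, never exceeding Ig.
    m_cr: cracking moment, m_a: applied service moment,
    i_g: gross moment of inertia, i_cr: cracked moment of inertia."""
    if m_a <= m_cr:
        return i_g                     # section uncracked: gross inertia governs
    ratio = (m_cr / m_a) ** 3
    return min(ratio * i_g + (1.0 - ratio) * i_cr, i_g)

# Illustrative numbers: Mcr = 50, Ma = 100 (kN*m); Ig, Icr in mm^4
i_e = effective_moment_of_inertia(50.0, 100.0, 1.0e9, 4.0e8)   # -> 4.75e8 mm^4
```

Because a single representative Ie is applied over the whole span, the shape of the moment diagram is not reflected, which is exactly the limitation the study examines.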

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.9
    • /
    • pp.351-360
    • /
    • 2018
  • The LWR (locally weighted regression) model is traditionally a lazy, memory-based learning model designed to obtain a prediction for a given input variable, the query point; it is a form of regression over a short interval, learned by giving higher weights to samples closer to the query point. We study an incremental ensemble learning approach for LWR. The proposed method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution at a specific query point. A weakness of existing LWR models is that multiple LWR models can be generated depending on the indicator function and the selection of data samples, and the quality of the predictions varies with the model; however, little research has addressed the problem of selecting or combining multiple LWR models. In this study, after generating an initial LWR model according to the indicator function and the sample data set, we iterate an evolutionary learning process to obtain a proper indicator function, and assess the LWR models on other sample data sets to overcome data-set bias. We adopt an eager learning approach, gradually generating and storing an LWR model whenever data are generated for each section. To obtain a prediction at a specific point in time, an LWR model is generated from newly generated data within a predetermined interval and then combined with the existing LWR models of that section using a genetic algorithm. The proposed method shows better results than selecting among multiple LWR models with a simple average. The results of this study are also compared with predictions from multiple regression analysis, using real data such as hourly traffic volume in a specific area and hourly sales of a highway rest area.
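The core LWR step described above - fitting a local regression with higher weights near the query point - can be sketched as follows. The Gaussian kernel and bandwidth tau are common choices assumed here for illustration, not necessarily the paper's indicator function.

```python
import math

def lwr_predict(xs, ys, x_query, tau=1.0):
    """Locally weighted linear regression at a single query point:
    fit y = a + b*x by weighted least squares, with Gaussian weights
    exp(-(x - x_query)^2 / (2*tau^2)) favouring nearby samples."""
    w = [math.exp(-((x - x_query) ** 2) / (2.0 * tau ** 2)) for x in xs]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw      # weighted mean of x
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw      # weighted mean of y
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    b = cov / var if var else 0.0
    a = my - b * mx
    return a + b * x_query

# Toy data: the local fit interpolates between nearby samples
y_hat = lwr_predict([0, 1, 2, 3, 4], [0, 2, 4, 6, 8], 2.5)   # -> 5.0
```

An ensemble, as in the paper, would maintain several such models per section and combine their predictions; here a genetic algorithm could search over the model weights.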

Accuracy Analysis of ADCP Stationary Discharge Measurement for Unmeasured Regions (ADCP 정지법 측정 시 미계측 영역의 유량 산정 정확도 분석)

  • Kim, Jongmin;Kim, Seojun;Son, Geunsoo;Kim, Dongsu
    • Journal of Korea Water Resources Association
    • /
    • v.48 no.7
    • /
    • pp.553-566
    • /
    • 2015
  • Acoustic Doppler Current Profilers (ADCPs) can concurrently capture the three-dimensional velocity vector and the bathymetry in a highly efficient and rapid manner, enabling them to document hydrodynamic and morphologic data at a higher spatial and temporal resolution than other contemporary instruments. However, ADCPs inevitably leave unmeasured regions near the bottom, the surface, and the edges of a given cross-section. The velocity in those unmeasured regions is usually extrapolated or assumed when calculating flow discharge, which directly affects the accuracy of the discharge assessment. This study scrutinized a conventional extrapolation method (the 1/6 power law) for estimating the unmeasured regions in order to quantify the accuracy of ADCP discharge measurements. For the comparative analysis, we collected spatially dense velocity data using an ADV as well as a stationary ADCP in a real-scale straight river channel, and tested the applicability of the 1/6 power law alongside the logarithmic law, another representative velocity profile. The logarithmic law fitted the actual velocity measurements better than the 1/6 power law. In particular, the 1/6 power law tended to underestimate the velocity in the near-surface region and overestimate it in the near-bottom region. This indicates that the 1/6 power law may not follow the actual flow regime, so the resulting discharge estimates in the unmeasured top and bottom regions can introduce discharge bias. The logarithmic law should therefore be considered as an alternative, especially for stationary ADCP discharge measurement. In addition, it was found that the ADCP should be operated in water at least 0.6 m deep at the left and right edges to better estimate the edge discharges. In the future, a similar comparative analysis is needed for the moving-boat ADCP discharge measurement method, which is more widely used in the field.
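The two velocity-profile laws compared above can be written down directly. The parameterizations below (a reference velocity for the power law; shear velocity and roughness length for the log law) are common textbook forms assumed for illustration, not necessarily the exact forms fitted in the study.

```python
import math

KAPPA = 0.41  # von Karman constant

def power_law(u_ref, z, h, exponent=1.0 / 6.0):
    """1/6 power-law profile: u(z) = u_ref * (z / h)**(1/6),
    with height z above the bed, depth h, and reference velocity u_ref."""
    return u_ref * (z / h) ** exponent

def log_law(u_star, z, z0):
    """Logarithmic profile: u(z) = (u* / kappa) * ln(z / z0),
    with shear velocity u_star and roughness length z0."""
    return (u_star / KAPPA) * math.log(z / z0)

# Extrapolating into the unmeasured near-bottom region (illustrative values)
u_bottom_power = power_law(1.2, 0.1, 2.0)    # power-law estimate at z = 0.1 m
u_bottom_log = log_law(0.05, 0.1, 0.001)     # log-law estimate at z = 0.1 m
```

Integrating either profile over the unmeasured top and bottom layers gives the extrapolated portion of the discharge, which is where the two laws produce the bias discussed above.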

Optimization of the Truss Structures Using Member Stress Approximate method (응력근사해법(應力近似解法)을 이용한 평면(平面)트러스구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;You, Hee Jung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.2
    • /
    • pp.73-84
    • /
    • 1993
  • In this research, the configuration design optimization of plane truss structures was tested using a decomposition technique. In the first level, the nonlinear programming problem is effectively transformed into a linear programming problem, and the number of structural analyses necessary for the sensitivity analysis is decreased by developing the stress constraints into member stress approximations based on the design-space approach, which has proven efficient for sensitivity analysis. The weight function is adopted as the cost function in order to minimize the structure. The design constraints considered are allowable stress, buckling stress, displacement constraints under multiple load conditions, and upper and lower bounds on the design variables. In the second level, the nodal point coordinates of the truss are used as the coordinating variables and the objective is again the weight function; by treating the nodal coordinates as design variables, the resulting unconstrained optimal design problems are easy to solve. This decomposition method, which optimizes the section areas in the first level and the configuration variables in the second level, was applied to plane truss structures. Numerical comparisons for several truss structures of various shapes and design criteria show that the convergence rate is very fast regardless of the constraint types and the configuration of the truss, and the optimal configurations obtained in this study are almost identical to those from other results. The total weight could be decreased by 5.4% to 15.4% when the optimal configuration was achieved, though with some differences between cases.
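The flavor of the first-level (section-area) optimization can be illustrated with the classic stress-ratio sizing rule for a statically determinate truss, where member forces are independent of areas. This is a textbook illustration only, not the paper's member-stress approximation or linear-programming formulation.

```python
def size_members(member_forces_N, sigma_allow_Pa, a_min=1e-4):
    """Stress-ratio sizing: give each member the smallest area that
    keeps |N_i| / A_i within the allowable stress, subject to a
    minimum gauge area a_min (areas in m^2, forces in N)."""
    return [max(abs(n) / sigma_allow_Pa, a_min) for n in member_forces_N]

# Illustrative member axial forces (N) and allowable stress (Pa)
areas = size_members([50000.0, -20000.0, 8000.0], 100e6)
```

In the paper's two-level scheme, a sizing step like this (with buckling and displacement constraints added) would alternate with a second-level update of the nodal coordinates.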

Immunohistochemical Detection of Lymph Nodes Micrometastases in Patients of Pathologic Stage I Non-small-cell Lung Cancer (병리적 병기 1기의 비소세포폐암 환자에서 면역조직화학염색에 의한 림프절 미세전이 관찰)

  • Ryu, Jeong-Seon;Han, Hye-Seung;Kim, Min-Ji;Kwak, Seung-Min;Cho, Jae-Hwa;Yoon, Yong-Han;Lee, Hong-Lyeol;Chu, Young-Chae;Kim, Kwang-Ho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.57 no.4
    • /
    • pp.345-350
    • /
    • 2004
  • Background: To evaluate the frequency and clinical significance of lymph node micrometastasis in patients with non-small-cell lung cancer pathologically staged as T1-2,N0. Method: From 29 consecutive patients with non-small-cell lung cancer who underwent curative operation and routine systematic nodal dissection, we immunohistochemically examined 806 lymph nodes from mediastinal, hilar, and peribronchial regions. For each lymph node, one section was stained with hematoxylin and eosin and a consecutive section with cytokeratin AE1/AE3 antibody to detect micrometastasis. Results: Of the 806 lymph nodes examined, no tumor cells were seen on hematoxylin and eosin staining, and micrometastatic foci were found in 3 nodes (0.37%), which were upper paratracheal, interlobar, and peribronchial lymph nodes. These three positive nodes came from 3 (10.3%) of the 29 patients. Nine patients died, from disease progression (4), postoperative complications (3), and concomitant diseases (2). The four patients with disease progression showed no evidence of micrometastasis on their lymph node examination. Conclusion: Lymph node micrometastasis was found in 0.37% of the 806 lymph nodes examined. The results suggest that routine analysis of lymph node micrometastasis does not provide additional clinical information for patients with non-small-cell lung cancer.

Analysis of 2009 Revised Chemistry I Textbooks Based on STEAM Aspect (STEAM 관점에서 2009 개정 화학 I 교과서 분석)

  • Bok, Juri;Jang, Nak Han
    • Journal of Science Education
    • /
    • v.36 no.2
    • /
    • pp.381-393
    • /
    • 2012
  • This study analyzed which STEAM elements, apart from scientific content, are contained in the 2009 revised Chemistry I textbooks for high school students. First, the STEAM elements in the textbooks were examined in three ways: by publishing company, by unit, and by area of the textbook. For reference, new sub-elements of STEAM were defined, because the existing STEAM elements are incongruent with the current textbooks. As a result, most chemistry textbooks included STEAM elements appropriately for learning inter-related with other fields. Each textbook had its own methods of utilizing STEAM elements, and these could not be unified into one approach; the learning methods differed somewhat from textbook to textbook. The detailed STEAM elements contained in the textbooks were classified into just 14 types, and they were concentrated on a few elements depending on the textbook, which suggests a certain limitation of current STEAM education in the chemistry field. By unit, the STEAM elements contained differed according to the curriculum, and almost all were located in Section I; consequently, it is difficult to include STEAM elements when mathematics or history is not present in the curriculum. Lastly, by area, most STEAM elements were included in the reference sections and focused on art and culture. Thus, STEAM was used mainly to apply chemical knowledge in context, while convergence training as an approach to chemical knowledge was insufficient.

Two Dimensional Size Effect on the Compressive Strength of Composite Plates Considering Influence of an Anti-buckling Device (좌굴방지장치 영향을 고려한 복합재 적층판의 압축강도에 대한 이차원 크기 효과)

  • ;;C. Soutis
    • Composites Research
    • /
    • v.15 no.4
    • /
    • pp.23-31
    • /
    • 2002
  • The two-dimensional size effect of the specimen gauge section (length × width) on the compressive behavior of a T300/924 [45/-45/0/90]3s carbon fiber-epoxy laminate was investigated. A modified ICSTM compression test fixture was used together with an anti-buckling device to test 3 mm thick specimens with gauge sections of 30 mm × 30 mm, 50 mm × 50 mm, 70 mm × 70 mm, and 90 mm × 90 mm. In all cases failure was sudden and occurred mainly within the gauge length. Post-failure examination suggests that 0° fiber microbuckling is the critical damage mechanism that causes final failure. This is the matrix-dominated failure mode, and its triggering depends very much on the initial fiber waviness, which suggests that the manufacturing process and quality may play a significant role in determining the compressive strength. When the anti-buckling device was used, the measured compressive strength was slightly greater than without it, owing to surface friction between the specimen and the device caused by the pretorque in the bolts. A finite element analysis of the influence of the anti-buckling device found that the compressive strength with the device and loaded bolts was about 7% higher than the actual compressive strength. Additionally, compressive tests were performed on specimens with an open hole. The local stress concentration arising from the hole, rather than the stresses in the bulk of the material, dominates the strength of the laminate. The remote failure stress was observed to decrease with increasing hole size and specimen width, but it is generally well above the value one might predict from the elastic stress concentration factor. This suggests that the material is not ideally brittle and that some stress relief occurs around the hole. X-ray radiography reveals that damage in the form of fiber microbuckling and delamination initiates at the edge of the hole at approximately 80% of the failure load and extends stably under increasing load before becoming unstable at a critical length of 2-3 mm (depending on specimen geometry). This damage growth and failure were analysed with a linear cohesive zone model. Using the independently measured laminate parameters of unnotched compressive strength and in-plane fracture toughness, the model successfully predicts the notched strength as a function of hole size and width.
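The elastic stress-concentration prediction that the measured notched strengths exceed can be sketched as a simple lower bound. Kt = 3 holds for a circular hole in an infinite isotropic plate, so both Kt and the 600 MPa unnotched strength below are illustrative assumptions rather than the laminate's actual values.

```python
def notched_strength_brittle(unnotched_strength, kt=3.0):
    """Ideally brittle lower-bound estimate: a plate with a hole fails
    when the hole-edge stress (Kt * remote stress) reaches the
    unnotched strength, i.e. sigma_notched = sigma_un / Kt.
    Kt = 3.0 applies to a circular hole in an infinite isotropic plate."""
    return unnotched_strength / kt

# Illustrative: a 600 MPa unnotched strength gives a 200 MPa brittle bound;
# measured notched strengths lie above such a bound because fiber
# microbuckling and delamination relieve the stress at the hole edge.
sigma_notched = notched_strength_brittle(600.0)   # -> 200.0 MPa
```

A cohesive zone model, as used in the paper, replaces this bound with a damage zone of finite length at the hole edge, which is why it can capture the hole-size dependence the brittle estimate misses.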