Title/Summary/Keyword: 반복도


Shear Damage Behavior of Reinforced Concrete Beams under Fatigue Loads (반복하중을 받는 철근콘크리트보의 전단피로손상거동)

  • 오병환;한승환;이형준;김지상;신호상
    • Magazine of the Korea Concrete Institute / v.10 no.1 / pp.143-151 / 1998
  • In recent years, damage to reinforced concrete structures caused by repeated loading has been observed with increasing frequency, and structures such as bridges are often subjected to excess loads from overloaded vehicles, which aggravates this fatigue damage. In this study, an experimental investigation of the cumulative fatigue damage of reinforced concrete beams under repeated loading was carried out to characterize the damage process under fatigue loads. Test members were fabricated with the amount of shear reinforcement, the magnitude of the repeated load, and the number of load cycles as the experimental variables, and a 3 Hz repeated load was applied to the specimens in load-controlled flexural tests. The diagonal tension cracking load and the cumulative damage behavior of the beams under repeated loading after diagonal cracking, namely the changes in deflection, shear reinforcement strain, and energy dissipation, were evaluated experimentally. The results show that, owing to damage accumulated under repeated loading, the deflection and shear strain of reinforced concrete beams increase rapidly in the early stage of loading and then grow gradually thereafter. The results of this study describe the accumulation process of fatigue damage that can develop progressively under service loads.

A Predicted Newton-Raphson Iterative Method utilizing Neural Network (신경회로망을 이용한 예측 뉴턴-랩손 반복계산기법)

  • Kim, Jong-Hoon;Kim, Yong-Hyup
    • Proceedings of the KSME Conference / 2000.04a / pp.339-344 / 2000
  • The Newton-Raphson method is an iterative scheme widely used in the nonlinear analysis of structures. Even allowing for advances in computing power, iterative methods for nonlinear analysis require considerable computation time. This paper proposes a Predicted Newton-Raphson iterative method that uses a neural network for prediction. Whereas the conventional Newton-Raphson method starts the iteration for the current step from the converged point of the previous step, the proposed method starts from a predicted point for the converged solution of the current step. The prediction is obtained by a neural network that captures the trend of the converged solutions of the preceding steps. Because the iteration starts closer to the convergence point, convergence is faster and larger load steps are allowed. Moreover, since the computation performed from the starting point is identical to the conventional Newton-Raphson procedure, the converged solution agrees exactly with that of the conventional method. The accuracy and efficiency of the modified Newton-Raphson method and the proposed Predicted Newton-Raphson method were compared through numerical analyses of the static nonlinear behavior of structures. The Predicted Newton-Raphson method produced the same solutions as the modified Newton-Raphson method while being far more computationally efficient.

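The mechanism described in this abstract is straightforward to sketch. Below is a minimal illustration, assuming a generic residual/Jacobian interface and substituting simple linear extrapolation for the paper's neural-network predictor; all function and variable names are hypothetical.

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Standard Newton-Raphson: iterate from x0 until the residual vanishes."""
    x = x0.copy()
    for k in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, k                      # converged solution, iterations used
        x -= np.linalg.solve(jacobian(x), r)
    raise RuntimeError("Newton-Raphson did not converge")

def predicted_start(history):
    """Predict the next converged solution from past load steps.
    The paper trains a neural network on the trend of previous converged
    solutions; linear extrapolation is a simple stand-in here."""
    if len(history) < 2:
        return history[-1]
    return 2 * history[-1] - history[-2]

def solve_load_steps(internal_force, jacobian, load_steps, x_init):
    """Load-stepped nonlinear solve f(x) = p_n, starting each step
    from the predicted point rather than the previous converged point."""
    history = [x_init]
    for p in load_steps:
        res = lambda x: internal_force(x) - p
        x0 = predicted_start(history)        # predicted starting point
        x, iters = newton_raphson(res, jacobian, x0)
        history.append(x)
    return history
```

Because the iterations themselves are unchanged, the converged solution is identical to that of standard Newton-Raphson; only the starting point, and hence the iteration count, differs.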

Efficient stop criterion algorithm of the turbo code using the maximum sign change of the LLR (LLR 최대부호변화를 적용한 터보부호의 효율적인 반복중단 알고리즘)

  • Shim Byoung-Sup;Jeong Dae-Ho;Lim Soon-Ja;Kim Tae-Hyung;Kim Hwan-Yong
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.5 s.347 / pp.121-127 / 2006
  • It is well known that turbo codes perform better in the AWGN channel environment as the number of iterations and the interleaver size increase. However, increasing the number of iterations and the interleaver size also increases the delay and computation required for iterative decoding. It is therefore important to devise an efficient criterion that stops the iteration process and prevents unnecessary computation and decoding delay. This paper proposes an efficient stop criterion algorithm for turbo codes that uses the maximum sign change of the LLR. It is verified that the proposed variable iterative decoding controller reduces the average number of decoding iterations compared with conventional schemes, with negligible degradation of the error performance.
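As a rough illustration of a sign-change-based stop test, the sketch below counts the information-bit LLRs whose signs flip between consecutive iterations and stops once none remain; `siso_iteration`, the iteration cap, and the zero-change threshold are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def count_sign_changes(llr_prev, llr_curr):
    """Number of information bits whose LLR sign flipped between iterations."""
    return int(np.sum(np.sign(llr_prev) != np.sign(llr_curr)))

def decode_with_stop(siso_iteration, llr_init, max_iter=8):
    """Run turbo iterations, stopping early when no LLR sign changes remain.
    siso_iteration is assumed to perform one full turbo iteration and
    return the updated LLRs."""
    llr = llr_init
    for it in range(1, max_iter + 1):
        llr_new = siso_iteration(llr)
        if count_sign_changes(llr, llr_new) == 0:
            return np.sign(llr_new), it      # hard decisions, iterations used
        llr = llr_new
    return np.sign(llr), max_iter
```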

New stop criterion using the absolute mean value of LLR difference for Turbo Codes (LLR 차의 절대 평균값을 이용한 터보부호의 새로운 반복중단 알고리즘)

  • Shim ByoungSup;Lee Wanbum;Jeong DaeHo;Lim SoonJa;Kim TaeHyung;Kim HwanYong
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.5 s.335 / pp.39-46 / 2005
  • It is well known that turbo codes perform better in the AWGN channel environment as the number of iterations and the interleaver size increase. However, increasing the number of iterations and the interleaver size also increases the delay and computation required for iterative decoding. It is therefore important to devise an efficient criterion that stops the iteration process and prevents unnecessary computation and decoding delay. This paper proposes an efficient iterative decoding stop criterion that uses the absolute mean value of the LLR difference. It is verified that the proposed stop criterion reduces the average number of decoding iterations compared with conventional schemes, with negligible degradation of the error performance.
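This criterion differs from the previous entry only in the quantity tested: the decoder stops once the absolute mean of the LLR difference between consecutive iterations falls below a threshold. A minimal sketch of just that test, which plugs into the same decoding loop sketched above (the threshold value is a placeholder, not the paper's):

```python
import numpy as np

def mean_abs_llr_difference(llr_prev, llr_curr):
    """Absolute mean value of the LLR difference between two iterations."""
    return float(np.mean(np.abs(llr_curr - llr_prev)))

def should_stop(llr_prev, llr_curr, threshold=0.1):
    # Placeholder threshold; small LLR movement between iterations
    # signals that further decoding passes are unlikely to help.
    return mean_abs_llr_difference(llr_prev, llr_curr) < threshold
```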

An Analytical Procedure to Estimate Non-recurrent Congestion caused by Freeway Accidents (고속도로 교통사고로 인한 비 반복 혼잡 추정 연구)

  • Jeong, Yeon-Sik;Jo, Han-Seon;Kim, Ju-Yeong
    • Journal of Korean Society of Transportation / v.28 no.2 / pp.45-52 / 2010
  • The objective of this paper is to develop and apply a method that estimates the amount of traffic congestion (vehicle-hours of delay) caused by traffic accidents that occur on freeways in Korea. A key feature of this research is the development of a method to separate the non-recurrent delay from any recurrent delay that is present on the road at the time and place of a reported accident. The main idea for separating these two delays is to use the difference between the speed under accident conditions and the speed under normal flow conditions. For the case study application, two datasets were combined: (1) accident data and (2) traffic flow data. The results can be useful for evaluating the performance of accident reduction programs, for strategic plans to cope with congestion caused by traffic accidents, and for refining the estimation method for traffic congestion costs.
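The separation idea reduces to comparing travel times at accident-condition speeds and normal-condition speeds over the same segment and period. A minimal sketch under assumed units; the paper's actual formulation and variable names may differ:

```python
def non_recurrent_delay(volume_vph, duration_h, length_km,
                        v_accident_kmh, v_normal_kmh):
    """Vehicle-hours of delay attributable to the accident alone:
    travel time at accident-condition speeds minus travel time at
    normal-flow speeds, for the vehicles passing during the incident."""
    t_accident = length_km / v_accident_kmh  # hours per vehicle during the incident
    t_normal = length_km / v_normal_kmh      # hours per vehicle under normal flow
    vehicles = volume_vph * duration_h       # vehicles exposed to the incident
    return vehicles * (t_accident - t_normal)

# Example: 4,000 veh/h over a 2 km segment, slowed from 90 to 30 km/h for 1.5 h
print(non_recurrent_delay(4000, 1.5, 2.0, 30, 90))  # ≈ 266.7 veh·h
```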

Repeated Reading Experience of University Students (대학생들의 반복독서 경험에 관한 연구)

  • Lee, Seung-Chae
    • Journal of the Korean Society for Library and Information Science / v.41 no.2 / pp.161-180 / 2007
  • The goal of this study is to examine what sorts of books people read repeatedly, how preferences differ between men and women, how strongly repeatedly-read books are related to the most memorable books, and how repeated reading is connected to reading habits. A questionnaire was given to college students and their repeated reading experiences were surveyed. The results of the statistical analysis are summarized as follows: 1) Most college students have read a book more than twice. 2) The largest group of students has read a book twice, followed by those who have read a book three times; the number of repeated readings is similar for male and female students. 3) The books that many students read more than twice are: a) Romance of the Three Kingdoms, b) The Little Prince, c) The Da Vinci Code, d) The Alchemist, e) Chinese nine-spine stickleback, f) Meu Pé de Laranja Lima, g) Harry Potter. 4) About half of the students have read their most memorable book many times. 5) When the importance of books is rated by the number of repeat readers and the number of readings, the order is Romance of the Three Kingdoms, The Little Prince, The Da Vinci Code, Meu Pé de Laranja Lima, Tuesdays with Morrie, Harry Potter, and Greek and Roman mythology. 6) Male students mostly preferred stories of warriors, while female students preferred books containing deep emotion and morality. More than half of the males' repeated reading was of Romance of the Three Kingdoms, while the books preferred by females were distributed widely.

Design of Structure for Loop Bound Analysis based on PS-Block (PS-Block 구조 기반의 반복횟수 분석 구조 설계)

  • Kim Yun-Kwan;Shin Won;Kim Tae-Wan;Chang Chun-Hyon
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.195-198 / 2006
  • Real-time programs are used in many fields, including aircraft, ships, and railway reservation systems, and their developers must consider both logical and temporal correctness. Temporal correctness is the most important property of a real-time program, and its deadline is defined by the developer. Developers therefore need static execution-time analysis that can provide a reference point for defining deadlines. In static execution-time analysis, analyzing a program's loop bounds accounts for a large share of the work. In previous work, loop bound analysis depended on user input, and research on automating it is in progress. The result of loop bound analysis, however, depends on the policy for determining the control variables that affect the iteration count. This paper therefore presents a method, based on the PS-Block structure, that comprehensively analyzes the control variables affecting the iteration count, enabling a more precise loop bound analysis that automates the user input. Static execution-time analysis can thereby achieve more accurate and more reliable results through precise loop bound analysis.

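For the simplest class of counted loops, the loop bound follows directly from the control variable's initial value, limit, and stride. The sketch below illustrates only that base case; the paper's PS-Block analysis covers far more general control-variable policies:

```python
import math

def loop_bound(init, limit, step):
    """Iteration count of `for (i = init; i < limit; i += step)`.

    A minimal static loop-bound computation for a monotonically
    increasing control variable; non-positive strides and bounds
    that depend on runtime input need the fuller analysis."""
    if step <= 0 or limit <= init:
        return 0
    return math.ceil((limit - init) / step)

print(loop_bound(0, 10, 3))  # 4 iterations: i = 0, 3, 6, 9
```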

Modelling for Repeated Measures Data with Composite Covariance Structures (복합구조 반복측정자료에 대한 모형 연구)

  • Lee, Jae-Hoon;Park, Tae-Sung
    • The Korean Journal of Applied Statistics / v.22 no.6 / pp.1265-1275 / 2009
  • In this paper, we investigated composite covariance structure models for repeated measures data with multiple repeat factors. When the number of repeat factors is three or more, it is infeasible to fit the composite covariance models using existing statistical packages. In order to fit composite covariance structure models to real data, we propose two approaches: a dimension reduction approach for the repeat factors and a random effect model approximation approach. The proposed approaches are illustrated using blood pressure data with three repeat factors obtained from 883 subjects.
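Composite covariance structures for multiple repeat factors are commonly built as Kronecker products of one covariance matrix per factor, which is also why their size grows multiplicatively as factors are added. The sketch below shows that construction with AR(1) blocks; the specific structures and dimensions are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def ar1(n, rho, sigma2=1.0):
    """AR(1) covariance matrix for one repeat factor."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(np.subtract.outer(idx, idx))

# Composite covariance over three repeat factors as a Kronecker product
# (dimensions and correlations are made up for illustration).
sigma_time = ar1(4, 0.6)   # e.g., measurement occasions
sigma_cond = ar1(2, 0.3)   # e.g., measurement condition
sigma_rep = ar1(3, 0.5)    # e.g., replicate readings
sigma = np.kron(np.kron(sigma_time, sigma_cond), sigma_rep)
print(sigma.shape)  # (24, 24) — grows multiplicatively with each repeat factor
```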

The Effect of Repetitive Compression with Constant Stress on the Compressive Properties of Foams (일정 응력 반복압축이 발포체의 압축 특성에 미치는 영향)

  • Park, Cha-Cheol
    • Elastomers and Composites / v.40 no.4 / pp.258-265 / 2005
  • To study the compressive stress, recovery force, and permanent strain of foams for footwear midsoles, polyurethane (PU), phylon (PH), and injection phylon (IP) foams were repetitively compressed under constant compressive stress. The maximum compressive stress of PU did not decrease under repeated compression at constant stress, but that of IP decreased markedly. Permanent engineering strain developed as the three types of foam were repetitively compressed; the engineering strain of PU was smaller than that of IP and PH. The compressive stress and recovery force of IP and PH at a given strain decreased with repeated compression, whereas those of PU showed no noticeable change.

Efficient Determination of Iteration Number for Algebraic Reconstruction Technique in CT (CT의 대수적재구성기법에서 효율적인 반복 횟수 결정)

  • Gil, Joon-Min;Chon, Kwon Su
    • Journal of the Korean Society of Radiology / v.17 no.1 / pp.141-148 / 2023
  • The algebraic reconstruction technique (ART) is one of the reconstruction methods used in CT and shows good image quality under noise-dominant conditions. The number of iterations is one of the key factors determining the execution time of the algebraic reconstruction technique. However, existing rules for determining the number of iterations can call for more than a few hundred iterations, which makes them difficult to apply in practice. In this study, we propose a method to determine the number of iterations for practical applications. Because the quality of the reconstructed image converges slowly as the number of iterations increases, a change in image quality of ε < 0.001 was used to determine the optimal number of iterations. The Shepp-Logan head phantom was used to obtain a noise-free projection, and projections with noise for 360, 720, and 1440 views were obtained using a Geant4 Monte Carlo simulation with the same geometric dimensions as a clinical CT system. Images reconstructed within about 10 iterations under the stop condition showed good quality. The proposed method for determining the number of iterations is an efficient replacement for the best-image-quality-based method, which requires more than a few hundred iterations.
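A minimal sketch of the shape of such a stop condition: run Kaczmarz-style ART sweeps and stop once the relative change of the image between successive sweeps drops below ε = 0.001. The change measure and relaxation value here are assumptions standing in for the paper's image-quality measure:

```python
import numpy as np

def art_reconstruct(A, b, relax=0.5, eps=1e-3, max_iter=500):
    """ART (Kaczmarz sweeps) with an early stop when the relative change
    of the reconstruction between successive full sweeps falls below eps."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)   # squared norm of each ray's row
    for sweep in range(1, max_iter + 1):
        x_prev = x.copy()
        for i in range(A.shape[0]):           # project onto each ray equation
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.clip(x, 0, None)               # nonnegativity, usual in CT
        change = np.linalg.norm(x - x_prev) / (np.linalg.norm(x) + 1e-12)
        if change < eps:                      # the paper reports ~10 sweeps suffice
            return x, sweep
    return x, max_iter
```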