• Title/Summary/Keyword: code size


Calculation of Dose Distribution for SBRT Patient Using Geant4 Simulation Code (Geant4 전산모사 코드를 이용한 SBRT 환자의 선량분포 계산)

  • Kang, Jeongku;Lee, Jeongok;Lee, Dong Joon
    • Progress in Medical Physics
    • /
    • v.26 no.1
    • /
    • pp.36-41
    • /
    • 2015
  • A Monte Carlo based dose calculation program for stereotactic body radiotherapy was developed in this study, using the Geant4 toolkit that is widely used in radiotherapy. The photon energy spectrum of the medical linac studied in previous research was applied to the patient dose calculations. The geometry of the radiation fields defined by the multi-leaf collimators was taken into account in the PrimaryGeneratorAction class of the Geant4 code. A total of 8 fields were used in the patient dose calculations, with a rotation matrix parameterized by gantry angle determining the source positions. The DicomHandler class converted the binary DICOM data, containing the matrix dimensions, pixel size, endian type, HU numbers, bit size, padding value, and high-bit order, into ASCII format, and the patient phantom was constructed from the converted ASCII file. The EGSnrc code was used to compare the calculation efficiency of the material data.
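
As a rough illustration of the source-placement step described above (a hedged sketch, not the authors' code: the rotation-axis convention and the 1000 mm source-axis distance are assumptions), the source position for each field can be obtained by rotating a reference position about the isocenter by the gantry angle:

```python
import numpy as np

def source_position(gantry_angle_deg, sad_mm=1000.0):
    """Place the beam source as a function of gantry angle by rotating
    a reference position about the isocenter. Rotation about the y-axis
    and the reference position at gantry 0 are conventions assumed here."""
    theta = np.radians(gantry_angle_deg)
    rot_y = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
    return rot_y @ np.array([0.0, 0.0, sad_mm])

# One source position per field, e.g. 8 fields at equally spaced gantry angles.
positions = [source_position(a) for a in range(0, 360, 45)]
```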

Compressive Behavior of Concrete with Loading and Heating (가열 및 재하에 의한 콘크리트의 압축거동)

  • Kim, Gyu-Yong;Jung, Sang-Hwa;Lee, Tae-Gyu;Kim, Young-Sun;Nam, Jeong-Soo
    • Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.14 no.4
    • /
    • pp.119-125
    • /
    • 2010
  • The deformation performance of concrete at high temperature is governed by factors such as load, thermal strain, and creep. Various experimental studies on the thermal properties of concrete at high temperature have been carried out in Japan, Europe, and America, but each has produced different results owing to differences in heating method, heating duration, specimen size, and the loading and heating equipment; no unified experimental method exists so far. This study therefore reviewed experimental work on the strength performance of concrete subjected to combined heating and loading. The results show that the compressive strength of pre-loaded specimens increases in the temperature range between 100°C and about 400°C. The results were also analyzed by comparing them against the compressive strength equations at elevated temperature given in the CEN and CEB codes.
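
For context, code equations of this kind reduce the ambient compressive strength by a temperature-dependent factor, f_c(θ) = k_c(θ)·f_c(20°C). A minimal sketch of such a lookup is below; the tabulated values only approximate the EN 1992-1-2 (CEN) curve for siliceous-aggregate concrete and should be verified against the standard before any use:

```python
import numpy as np

# Approximate EN 1992-1-2 reduction factors k_c for siliceous-aggregate
# concrete (temperature in degrees C); illustrative values, verify against
# the standard.
TEMPS = [20, 100, 200, 300, 400, 500, 600, 700, 800]
K_C   = [1.00, 1.00, 0.95, 0.85, 0.75, 0.60, 0.45, 0.30, 0.15]

def fc_at_temperature(fc20_mpa, theta_c):
    """Compressive strength at temperature theta_c, by linear
    interpolation of the tabulated reduction factors."""
    return fc20_mpa * np.interp(theta_c, TEMPS, K_C)

print(fc_at_temperature(30.0, 350))  # ~24.0 MPa
```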

Accurate theoretical modeling and code prediction of the punching shear failure capacity of reinforced concrete slabs

  • Rajai Z. Al-Rousan;Bara'a R. Alnemrawi
    • Steel and Composite Structures
    • /
    • v.52 no.4
    • /
    • pp.419-434
    • /
    • 2024
  • A flat slab is a structural system in which columns support the slab directly, without beam elements. Despite its wide advantages, this structural system suffers from a major deficiency: stresses concentrate around the column perimeter, and losing the shear transfer mechanisms at the cracked interface can result in progressive collapse of the entire structure. Predicting the punching shear capacity of RC flat slabs is a challenging problem because the factors contributing to the overall slab strength vary broadly in their significance and extent of effect. This study proposed a new expression for predicting the slab's punching shear capacity using a nonuniform concrete tensile stress distribution assumption to capture, as well as possible, the induced strain effect within a thick RC flat slab. The overall punching shear capacity is therefore composed of three parts: the concrete, aggregate interlock, and dowel action contributions. The shear span-to-depth ratio (a_v/d) was introduced as a factor in the concrete contribution and, via the maximum aggregate size, in the aggregate interlock part. Other significant factors were considered, including the concrete type, concrete grade, size factor, and the dowel action of the flexural reinforcement. The efficiency of the proposed model was examined using 86 points of published experimental data from 19 studies and compared with five code standards (ACI318, EC2, MC2010, CSA A23.3, and JSCE). The results revealed the efficiency and accuracy of the model's predictions, with a coefficient of variation of 4.95%, compared to 13.67%, 14.05%, 15.83%, 19.67%, and 20.45% for ACI318, CSA A23.3, MC2010, EC2, and JSCE, respectively.
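
Schematically, the proposed capacity is the sum of the three contributions named above; in generic notation (the symbols here are illustrative, not the paper's):

```latex
V_u \;=\; \underbrace{V_c\!\left(f_c,\, a_v/d,\, \text{size}\right)}_{\text{concrete}}
\;+\; \underbrace{V_{ag}\!\left(d_{ag},\, a_v/d\right)}_{\text{aggregate interlock}}
\;+\; \underbrace{V_{dw}\!\left(\rho,\, f_y\right)}_{\text{dowel action}}
```

where d_ag is the maximum aggregate size and ρ, f_y describe the flexural reinforcement providing the dowel action.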

Investigation of the model scale and particle size effects on the point load index and tensile strength of concrete using particle flow code

  • Haeri, Hadi;Sarfarazi, Vahab;Zhu, Zheming;Hedayat, Ahmadreza;Marji, Mohammad Fatehi
    • Structural Engineering and Mechanics
    • /
    • v.66 no.4
    • /
    • pp.445-452
    • /
    • 2018
  • In this paper, the effects of particle size and model scale on the point load index, tensile strength, and failure process of concrete were investigated in a PFC2D numerical modeling study. Circular and semi-circular specimens of concrete were numerically modeled using the same particle size, 0.27 mm, but with different model diameters of 75 mm, 54 mm, 25 mm, and 12.5 mm. In addition, circular and semi-circular models with a diameter of 27 mm and particle sizes of 0.27 mm, 0.47 mm, 0.67 mm, 0.87 mm, 1.07 mm, and 1.27 mm were simulated to determine whether they can match the experimental observations from point load and Brazilian tests. The numerical modeling results show that the failure patterns are influenced by the model scale and particle size, as expected. Both the Is(50) and Brazilian tensile strength values increased as the model diameter and particle size increased. The ratio of Brazilian tensile strength to Is(50) decreased as the particle size increased but did not change with model scale.
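
For reference, the two quantities compared above are conventionally computed as follows (ISRM-style definitions, with P the failure load, D and t the disc diameter and thickness, and D_e the equivalent core diameter):

```latex
\sigma_t = \frac{2P}{\pi D t}, \qquad
I_s = \frac{P}{D_e^{2}}, \qquad
I_{s(50)} = \left(\frac{D_e}{50\ \mathrm{mm}}\right)^{0.45} I_s
```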

Evaluation of Shear Behavior of Large Granular Materials with Different Particle Sizes by Triaxial Test and Numerical Simulation

  • Kim, Bum-Joo;Sagong, Myung
    • Proceedings of the Korean Geotechnical Society Conference
    • /
    • 2010.09c
    • /
    • pp.55-60
    • /
    • 2010
  • Rockfill zones in concrete-faced rockfill dams (CFRD) typically consist of large granular materials, with maximum particle sizes of up to several meters, which makes laboratory testing to determine the mechanical properties of rockfill difficult. Commonly, the design strength of the rockfill is obtained by scaling down the original rockfill materials and performing laboratory strength tests on the reduced-size materials. The objective of the present study is to investigate the effect of particle size on the shear behavior and strength of granular materials. A series of large-scale triaxial tests was conducted on large granular materials with maximum particle sizes varying from 20 to 50 mm. The test results showed that the overall shear behaviors of samples with different particle sizes were similar, although there were slight differences in the magnitudes of the peak shear stress between the samples. In addition, a simulation of the granular material with a maximum particle size of 20 mm was performed using the DEM code PFC2D and compared with the test results. The deviatoric stress versus strain behaviors of the experimental and numerical tests were found to match well up to the peak stress state.
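
In triaxial terms, the curves being compared plot the deviatoric stress against axial strain, and for a cohesionless granular material the peak friction angle follows from the peak stress state (standard definitions, assumed here):

```latex
q = \sigma_1 - \sigma_3, \qquad
\sin\varphi_{\mathrm{peak}} = \left(\frac{\sigma_1 - \sigma_3}{\sigma_1 + \sigma_3}\right)_{\mathrm{peak}}
```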


An Enhanced Function Point Model for Software Size Estimation: Micro-FP Model (소프트웨어 규모산정을 위한 기능점수 개선 Micro-FP 모형의 제안)

  • Ahn, Yeon-S.
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.225-232
    • /
    • 2009
  • The function point method has been applied to software size estimation in industry because it estimates a system's size from the user's view rather than the developer's. However, the current function point method has some problems, such as the upper limit imposed on complexity. This paper therefore proposes an enhanced function point model, the Micro-FP model. With this model, software effort estimation can be performed more effectively because the model provides regression equations, and it can be applied to estimate the size of large application systems in detail. Analysis of 10 applications operated in a large organization shows that software sizes measured by the Micro-FP model have the advantage of correlating more strongly with LOC.
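
For background, a conventional (IFPUG-style) unadjusted function point count sums the five function types weighted by complexity; the Micro-FP model refines this scheme. A minimal sketch of the conventional count (weights as commonly tabulated; verify against the IFPUG counting manual):

```python
# IFPUG-style complexity weights: function type -> (low, average, high).
WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def unadjusted_fp(counts):
    """counts maps (function_type, complexity) -> number of functions,
    e.g. {("EI", "low"): 12, ("ILF", "high"): 3}."""
    levels = {"low": 0, "average": 1, "high": 2}
    return sum(n * WEIGHTS[ftype][levels[cplx]]
               for (ftype, cplx), n in counts.items())

print(unadjusted_fp({("EI", "low"): 12, ("ILF", "high"): 3}))  # 81
```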

A design and implementation of VHDL-to-C mapping in the VHDL compiler back-end (VHDL 컴파일러 후반부의 VHDL-to-C 사상에 관한 설계 및 구현)

  • 공진흥;고형일
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.12
    • /
    • pp.1-12
    • /
    • 1998
  • In this paper, a design and implementation of VHDL-to-C mapping in the VHDL compiler back-end is described. The analyzed data in an intermediate format (IF), produced by the compiler front-end, is transformed into a C-code model of VHDL semantics by the VHDL-to-C mapper. The C-code model of VHDL semantics is based on a functional template comprising declaration, elaboration, initialization, and execution parts. The mapping is carried out by utilizing C mapping templates of 129 types, classified by mapping unit and functional semantics, together with iterative algorithms that are combined with terminal information to produce C code. To generate the C program, the C code is output to the functional template either directly or by combining higher-level mapping results with intermediate mapping code held in a data queue. In experiments, the VHDL-to-C mapper completely handled the analyzed VHDL programs from the compiler front-end, which cover about 96% of the major VHDL syntactic programs in the Validation Suite. As for performance, the code size of the VHDL-to-C approach is smaller than that of an interpreter and larger than that of a direct-code compiler, whose generated code grows more rapidly with the size of the VHDL design; the timing overhead of VHDL-to-C needs to be improved through an optimized implementation of the mapping mechanism.
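
To illustrate the functional template the mapper targets (a hypothetical sketch, not the paper's actual 129 templates), a code generator of this kind fills a C skeleton with the four parts named above:

```python
C_TEMPLATE = """\
/* Generated C model of VHDL semantics (illustrative skeleton). */
{declarations}                       /* signals, processes, state */

static void elaborate(void)  {{ {elaboration} }}    /* build design hierarchy */
static void initialize(void) {{ {initialization} }} /* set initial signal values */
static void execute(void)    {{ {execution} }}      /* one simulation cycle */

int main(void) {{
    elaborate();
    initialize();
    for (;;) execute();              /* simulation loop (illustrative) */
}}
"""

def emit_c_model(decls, elab, init, exec_body):
    """Fill the four functional-template sections with mapped C code."""
    return C_TEMPLATE.format(declarations=decls, elaboration=elab,
                             initialization=init, execution=exec_body)

print(emit_c_model("static int clk;", "/* ... */", "clk = 0;", "clk = !clk;"))
```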


A Study on the Improvement of Source Code Static Analysis Using Machine Learning (기계학습을 이용한 소스코드 정적 분석 개선에 관한 연구)

  • Park, Yang-Hwan;Choi, Jin-Young
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.6
    • /
    • pp.1131-1139
    • /
    • 2020
  • Static analysis of source code is used to find remaining security weaknesses across a wide range of source code. A static analysis tool produces candidate findings, and a static analysis expert then classifies each finding as a true or false positive. In this process the volume of findings is large and the false positive rate is high, so much time and effort is required and a more efficient analysis method is needed. Moreover, when classifying true and false positives, experts rarely analyze only the source code of the line where the defect occurred; depending on the defect type, the surrounding source code is analyzed together before the final result is delivered. To ease the difficulty experts face in discriminating true and false positives from these static analysis tools, this paper proposes a method of determining whether a security weakness found by a static analysis tool is a true positive using machine learning rather than an expert. In addition, the optimal size of the training data (the source code around each defect) was confirmed through experiments on how this size affects performance. These results are expected to help static analysis experts with the job of classifying true and false positives after static analysis.
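
A minimal sketch of the kind of classifier this describes (the tokenizer, model choice, and window mechanics are assumptions, not the paper's setup): extract a window of N lines around each reported defect and train a binary true/false positive classifier, then sweep N to study the context-size effect.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def context_window(source_lines, defect_line, n):
    """Take n lines of code on each side of the reported defect line."""
    lo, hi = max(0, defect_line - n), defect_line + n + 1
    return "\n".join(source_lines[lo:hi])

# findings: list of (source_lines, defect_line_no); labels: 1 = true positive.
def train_classifier(findings, labels, window=5):
    texts = [context_window(src, ln, window) for src, ln in findings]
    clf = make_pipeline(TfidfVectorizer(token_pattern=r"\S+"),
                        LogisticRegression(max_iter=1000))
    return clf.fit(texts, labels)

# Sweeping `window` over, say, 1..20 with cross-validation mirrors the
# paper's question of how training-data size affects performance.
```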

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.5
    • /
    • pp.199-206
    • /
    • 2023
  • Modern software has grown to a huge size of source code, which increases the importance and necessity of static analysis for high-quality products. Static analysis of the code must identify its defects and complexity, and visualizing these problems makes it easier for developers and stakeholders to understand them in the source code. Our previous visualization research focused only on storing the results of static analysis in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. That approach has a limitation: because the tables are not normalized, joining the tables (classes, functions, attributes, etc.) to extract information from inside the code can waste both space and time. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for the quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. Through this mechanism we expect the code visualization process to be optimized and developers to be guided toward the modules that need refactoring. In the future, we will apply learning to parts of this process.
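
A minimal sketch of the idea (the schema and metric are illustrative assumptions, not the paper's actual tables): normalized tables keyed by id, with a coupling-style indicator extracted by joining them.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE class  (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE method (id INTEGER PRIMARY KEY,
                       class_id INTEGER REFERENCES class(id), name TEXT);
  CREATE TABLE call   (caller_id INTEGER REFERENCES method(id),
                       callee_id INTEGER REFERENCES method(id));
""")
# Outgoing calls per class: a simple coupling-style quality indicator,
# computed by joining the normalized tables instead of scanning raw results.
rows = con.execute("""
  SELECT c.name, COUNT(*) AS fan_out
  FROM call AS cl
  JOIN method AS m ON m.id = cl.caller_id
  JOIN class  AS c ON c.id = m.class_id
  GROUP BY c.id ORDER BY fan_out DESC;
""").fetchall()
```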

Comparison of Radioactive Waste Transportation Risk Assessment Using Deterministic and Probabilistic Methods (결정론적 및 확률론적 방법을 이용한 방사성폐기물 운반 위험도 평가 비교·분석 )

  • Min Woo Kwak;Hyeok Jae Kim;Ga Eun Oh;Shin Dong Lee;Kwang Pyo Kim
    • Journal of Radiation Industry
    • /
    • v.17 no.1
    • /
    • pp.83-92
    • /
    • 2023
  • When assessing the risk of transporting radioactive waste by land, computer codes such as RADTRAN and RISKIND are used as deterministic methods. Transportation risk assessment using a deterministic method requires relatively little assessment time, whereas assessment using a probabilistic method takes considerably longer but produces more reliable results. A study is therefore needed that evaluates exposure dose with a quick deterministic method and compares the results against a probabilistic method. The purpose of this study is to evaluate the exposure dose during transportation of radioactive waste using both deterministic and probabilistic methods and to compare and analyze the results. For this purpose, the main exposure factors were selected and various exposure situations were set: the distance between the radioactive waste and the receptor, the size of the package, and the speed of the vehicle were selected as the main exposure factors, and the exposure situations were broadly divided into the radioactive waste being stationary and it passing by. The dose (rate) models of the deterministic overland transportation risk assessment codes were analyzed, and the deterministic RADTRAN and RISKIND codes and the probabilistic MCNP 6 code were then used to evaluate the exposure dose in the various exposure situations, after which the results were compared and analyzed. The evaluated exposure dose (rate) showed similar tendencies in both the stationary and passing cases. For the same situation, the RADTRAN results were generally more conservative than the RISKIND and MCNP 6 results, while the RISKIND and MCNP 6 results were relatively similar to each other. The results of this study are expected to serve as basic data for establishing a radioactive waste transportation risk assessment system in Korea.
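
The role of the distance and vehicle-speed factors can be seen in a simplified unshielded point-source model (an illustrative idealization, not the RADTRAN or RISKIND formulation): integrating an inverse-square dose rate of source strength kQ along a straight drive-by at speed v and closest-approach distance d gives

```latex
D \;=\; \int_{-\infty}^{\infty} \frac{k\,Q}{d^{2} + (v t)^{2}}\, dt
  \;=\; \frac{\pi\, k\, Q}{v\, d},
```

so the integrated dose to a wayside receptor falls off linearly with both passing speed and lateral distance, while package size enters through the effective source geometry and self-shielding.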