• Title/Summary/Keyword: Benchmark Model Test


The Development of an Adjustable Dual-Level Load Limiter (적응형 듀얼레벨 로드리미터 개발)

  • Lee, In-Beom;Kang, Shin-You;Kim, Seock-Hyun;Ryoo, Won-Wha
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.10 / pp.1187-1191 / 2011
  • In this paper, the development of an adjustable load limiter, a component of the seat belt, is presented. The adjustable load limiter applies different load levels for occupants of varying weight and height. The recent regulation FMVSS 208 demands strict safety standards across different dummy-size percentiles. In this work, high- and low-level load conditions are proposed according to dummy scale and thoracic injury criteria. The suggested load conditions were verified by performing a sled test using the benchmark model, and a dual-level load limiter was developed on the basis of these tests. Experiments on product performance and a finite element analysis were carried out; the results identified points for improvement.
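The dual-level behavior described above can be sketched as a simple selection rule: pick a low or high belt-force limit depending on occupant size. This is a minimal illustration; the threshold and force limits below are assumed values, not the paper's calibrated conditions.

```python
def select_load_level(occupant_mass_kg: float, small_occupant_limit_kg: float = 55.0) -> str:
    """Return which load level the limiter should engage for a given occupant.

    The 55 kg threshold is illustrative only; the paper derives its conditions
    from dummy percentiles and thoracic injury criteria.
    """
    return "low" if occupant_mass_kg <= small_occupant_limit_kg else "high"

# Illustrative belt-force limits (kN) for the two levels.
LOAD_LIMIT_KN = {"low": 2.0, "high": 4.0}

def belt_force(applied_force_kn: float, level: str) -> float:
    """Idealized limiter: belt force is capped at the selected level's limit."""
    return min(applied_force_kn, LOAD_LIMIT_KN[level])
```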

A new phantom to evaluate the tissue dissolution ability of endodontic irrigants and activating devices

  • Kimia Khoshroo;Brinda Shah;Alexander Johnson;John Baeten;Katherine Barry;Mohammadreza Tahriri;Mohamed S. Ibrahim;Lobat Tayebi
    • Restorative Dentistry and Endodontics / v.45 no.4 / pp.45.1-45.8 / 2020
  • Objective: The aim of this study was to introduce a gelatin/bovine serum albumin (BSA) tissue standard that provides dissolution properties identical to those of biological tissues, and to evaluate whether the use of endodontic activating devices leads to enhanced phantom dissolution rates. Materials and Methods: Bovine pulp tissue was obtained to establish a benchmark of tissue dissolution. The surface area and mass of the samples were held constant while the ratios of gelatin and BSA were varied, ranging from 7.5% to 10% gelatin with 5% BSA. Each sample was placed in an individual test tube filled with an appropriate sodium hypochlorite solution for 1, 3, or 5 minutes, then removed from the solution, blotted dry, and weighed. The remaining tissue was expressed as a percent of the initial tissue to determine the tissue dissolution rate. A radiopaque agent (sodium diatrizoate) and a fluorescent dye (methylene blue) were added to the phantom to allow easy quantification of phantom dissolution in a canal block model when activated using ultrasonic (EndoUltra) or sonic (EndoActivator) energy. Results: The 9% gelatin + 5% BSA phantom showed statistically equivalent dissolution to bovine pulp tissue at all time intervals. Furthermore, the EndoUltra yielded significantly more phantom dissolution in the canal block than the EndoActivator or syringe irrigation. Conclusions: Our phantom is comparable to biological tissue in terms of dissolution and could be utilized for in vitro tests owing to its injectability and detectability.
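The dissolution measure in the Methods (remaining mass expressed as a percent of the initial mass at each time point) can be sketched as follows; the function names are illustrative, not from the paper.

```python
def percent_remaining(initial_mass_mg: float, current_mass_mg: float) -> float:
    """Remaining tissue as a percent of the initial mass."""
    return 100.0 * current_mass_mg / initial_mass_mg

def dissolution_profile(initial_mass_mg: float, masses_over_time_mg: list) -> list:
    """Percent of initial tissue remaining at each sampled time (e.g. 1, 3, 5 min)."""
    return [percent_remaining(initial_mass_mg, m) for m in masses_over_time_mg]
```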

An Iterative Data-Flow Optimal Scheduling Algorithm based on Genetic Algorithm for High-Performance Multiprocessor (고성능 멀티프로세서를 위한 유전 알고리즘 기반의 반복 데이터흐름 최적화 스케줄링 알고리즘)

  • Chang, Jeong-Uk;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.6 / pp.115-121 / 2015
  • In this paper, we propose an iterative data-flow optimal scheduling algorithm based on a genetic algorithm for high-performance multiprocessors. The basic hardware model can be extended to include detailed features of the multiprocessor architecture; this is illustrated by implementing a hardware model that requires routing the data transfers over a communication network with limited capacity. The scheduling method consists of three layers. In the top layer, a genetic algorithm takes care of the optimization: it generates different permutations of operations, which are passed on to the middle layer. The global scheduling layer makes the main scheduling decisions based on a permutation of operations; details of the hardware model are not considered at this level. That is done in the bottom layer by the black-box scheduling, which completes the scheduling of an operation and ensures that the detailed hardware model is obeyed. Both scheduling layers can insert cycles in the schedule to ensure that a valid schedule is always found quickly. Benchmark results on five filters show that the scheduling method is able to find good-quality schedules in reasonable time.
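The top two layers can be sketched as follows, under simplifying assumptions not in the paper: independent operations, identical processors, and no communication network or black-box bottom layer. A genetic algorithm searches over operation permutations, and a greedy "global scheduler" turns each permutation into a makespan.

```python
import random

def list_schedule(permutation: list, durations: list, num_procs: int) -> float:
    """Middle layer (global scheduling): assign operations, in the order given
    by the GA permutation, to whichever processor becomes free first."""
    free_at = [0.0] * num_procs
    for op in permutation:
        p = min(range(num_procs), key=lambda i: free_at[i])
        free_at[p] += durations[op]
    return max(free_at)  # schedule makespan

def ga_schedule(durations: list, num_procs: int,
                pop_size: int = 20, generations: int = 50, seed: int = 0):
    """Top layer: a genetic algorithm over operation permutations."""
    rng = random.Random(seed)
    ops = list(range(len(durations)))
    pop = [rng.sample(ops, len(ops)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda perm: list_schedule(perm, durations, num_procs))
        survivors = pop[: pop_size // 2]       # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(len(child)), rng.randrange(len(child))
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda perm: list_schedule(perm, durations, num_procs))
    return best, list_schedule(best, durations, num_procs)
```

Swap mutation keeps every candidate a valid permutation, which is why permutations (rather than raw start times) are a convenient GA encoding for scheduling.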

Deep-Learning Seismic Inversion using Laplace-domain wavefields (라플라스 영역 파동장을 이용한 딥러닝 탄성파 역산)

  • Jun Hyeon Jo;Wansoo Ha
    • Geophysics and Geophysical Exploration / v.26 no.2 / pp.84-93 / 2023
  • Supervised deep-learning seismic inversion techniques have demonstrated successful performance on synthetic data examples targeting small-scale areas. These techniques use time-domain wavefields as input and subsurface velocity models as output. Because time-domain wavefields contain various types of wave information, the data size is considerably large; therefore, supervised deep-learning seismic inversion trained on a significant amount of field-scale data has not yet been investigated. In this study, we predict subsurface velocity models using Laplace-domain wavefields as input instead of time-domain wavefields, in order to apply a supervised deep-learning seismic inversion technique to field-scale data. Using Laplace-domain wavefields significantly reduces the size of the input data, thereby accelerating neural-network training, although the resolution of the results is reduced. Additionally, a large grid interval can be used to efficiently predict a velocity model of field-data size, and the results can serve as the initial model for subsequent inversions. The neural network is trained using only synthetic data, by generating a massive set of synthetic velocity models and Laplace-domain wavefields of the same size as the field-scale data. In addition, we adopt a towed-streamer acquisition geometry to simulate a marine seismic survey. Testing the trained network on numerical examples using the test data and a benchmark model yielded appropriate background velocity models.
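The data reduction behind the approach can be sketched as the damped integral that defines a Laplace-domain wavefield, W(s) = ∫ w(t)·e^(−st) dt, evaluated at a few damping constants s per recorded trace: each full time series collapses to a handful of scalars. This is a generic numerical sketch, not the authors' implementation.

```python
import numpy as np

def laplace_transform_wavefield(trace: np.ndarray, dt: float,
                                damping_constants: list) -> np.ndarray:
    """Approximate W(s) = integral of w(t) * exp(-s*t) dt over the record
    length by a discrete (left Riemann) sum, one value per damping constant."""
    t = np.arange(len(trace)) * dt
    return np.array([np.sum(trace * np.exp(-s * t)) * dt
                     for s in damping_constants])
```

Note the reduction: a 1,000-sample trace becomes, say, three Laplace-domain values, which is what makes field-scale training data tractable in this framing.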

A Numerical Study on the Step 0 Benchmark Test in Task C of DECOVALEX-2023: Simulation for Thermo-Hydro-Mechanical Coupled Behavior by Using OGS-FLAC (DECOVALEX-2023 Task C 내 Step 0 벤치마크 수치해석 연구: OGS-FLAC을 활용한 열-수리-역학 복합거동 수치해석)

  • Kim, Taehyun;Park, Chan-Hee;Lee, Changsoo;Kim, Jin-Seop
    • Tunnel and Underground Space / v.31 no.6 / pp.610-622 / 2021
  • The DECOVALEX project is one of the representative international cooperative projects for enhancing the understanding, through numerical simulation, of the coupled Thermo-Hydro-Mechanical-Chemical (THMC) behavior of high-level radioactive waste disposal systems. DECOVALEX-2023 is the current phase, consisting of 7 tasks; Task C aims to model the THM coupled behavior in the disposal system based on the Full-scale Emplacement (FE) experiment at the Mont Terri underground rock laboratory. This study performs the numerical simulation using the OGS-FLAC simulator developed for the present work. In the numerical model, a constant-power heater was emplaced horizontally, following the FE experiment, and the pressure development, temperature increase, and mechanical deformation were monitored at specific monitoring points. Capillary pressure was the primary effect inducing flow in the buffer system, while thermal stress and thermal pressurization were dominant in the surrounding rock. The results will further be compared with those of the other participating groups and validated against the experimental data.

Evaluating SR-Based Reinforcement Learning Algorithm Under the Highly Uncertain Decision Task (불확실성이 높은 의사결정 환경에서 SR 기반 강화학습 알고리즘의 성능 분석)

  • Kim, So Hyeon;Lee, Jee Hang
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.331-338 / 2022
  • Successor representation (SR) is a model of human reinforcement learning (RL) that mimics the mechanism by which hippocampal cells construct cognitive maps. SR utilizes these learned map-like features to respond adaptively to frequent reward changes. In this paper, we evaluated the performance of SR in a context where changes in the latent variables of the environment trigger changes in the reward structure. As a benchmark test, we adopted SR-Dyna, an integration of SR into the goal-driven Dyna RL algorithm, in the 2-stage Markov Decision Task (MDT), in which we can intentionally manipulate the latent variables: state-transition uncertainty and goal condition. To precisely investigate the characteristics of SR, we conducted the experiments while controlling each latent variable that affects the changes in reward structure. The evaluation results showed that SR-Dyna could learn to respond to reward changes driven by the changes in latent variables, but could not learn rapidly in that situation. This highlights the necessity of building more robust RL models that can rapidly learn to respond to frequent changes in environments where latent variables and reward structure change at the same time.
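The SR core can be sketched as a temporal-difference update of a successor matrix M, from which values are recomputed immediately when rewards change (V = M·r), which is what gives SR its adaptivity to reward changes. The SR-Dyna replay machinery and the 2-stage MDT itself are omitted, and the hyperparameters are illustrative.

```python
import numpy as np

def sr_td_update(M: np.ndarray, s: int, s_next: int,
                 alpha: float = 0.1, gamma: float = 0.95) -> np.ndarray:
    """One TD update of the successor matrix after observing s -> s_next:
    M[s] <- M[s] + alpha * (onehot(s) + gamma * M[s_next] - M[s])."""
    onehot = np.zeros(M.shape[0])
    onehot[s] = 1.0
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M

def sr_value(M: np.ndarray, reward_vector: np.ndarray) -> np.ndarray:
    """State values follow instantly from a new reward vector: V = M @ r."""
    return M @ reward_vector
```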

Agency Costs of Clothing Companies with Famous Brand (유명 의류 상호 기업의 대리인 비용에 관한 연구)

  • Gong, Kyung-Tae
    • Management & Information Systems Review / v.36 no.4 / pp.21-32 / 2017
  • Motivated by recent cases of negligent social responsibility by foreign luxury fashion brands in Korea, this study investigates whether agency costs depend on the sustainability of different types of corporate governance. Agency costs refer either to vertical costs arising from the relationship between stockholders and managers, or to horizontal costs associated with potential conflicts between majority and minority stockholders. Firms with a luxury fashion brand could spend large sums on maintaining a magnificent brand image, thereby increasing agency costs; on the contrary, such firms may hold down wasteful spending in order to report a showy financial achievement, which mitigates agency costs. Agency costs are measured by the value of the first principal component. First, three ratios are constructed: asset turnover, operating expenses to sales, and earnings before interest, tax, and depreciation (EBITD). Then, each of these ratios for the individual firms in the sample is differenced from the corresponding ratio of the benchmark firm, S-OIL, which was designated the best corporate-governance model firm for 2013 by CGS. We regress each agency-cost index on a luxury-fashion-brand dummy and a set of control variables. The regression results indicate that the agency costs of firms with a luxury fashion brand exceed those of the control group in the fashion industry in terms of operating expenses, but fall short of those of the control group in terms of EBITD; thus the aggregate agency costs do not differ from those of the control group. In a sensitivity test using propensity score matching (PSM), the results are the same: the agency costs of these firms are higher than those of the matched control group. These results are corroborated by an additional analysis comparing the group of companies with the best brands against the control group.
The results raise doubts about the effectiveness of management in firms with a luxury fashion brand. A limitation of this study is that the research covers only 2013; we suggest that there is room for improvement in the current research methodology.
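The composite measure described above (three ratios per firm, differenced from the benchmark firm, combined into one score via a principal component) can be sketched as follows. The exact ratio definitions, orientations, and weighting are the paper's; this is a generic illustration of the construction.

```python
import numpy as np

def agency_cost_scores(firm_ratios: np.ndarray,
                       benchmark_ratios: np.ndarray) -> np.ndarray:
    """firm_ratios: (n_firms, 3) array of the three ratios (asset turnover,
    operating expenses to sales, EBITD) for each sample firm;
    benchmark_ratios: the same three ratios for the benchmark firm (S-OIL).
    Returns each firm's score on the first principal component of the
    benchmark-differenced ratios."""
    diffs = firm_ratios - benchmark_ratios    # difference from the benchmark firm
    centered = diffs - diffs.mean(axis=0)     # center before extracting the PC
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]                   # projection onto the first PC
```

Note that a principal component's sign is arbitrary, so in practice the index would be oriented so that larger values mean higher agency costs.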
