• Title/Summary/Keyword: Choice Simulation


Limit analysis of seismic collapse for shallow tunnel in inhomogeneous ground

  • Guo, Zihong;Liu, Xinrong;Zhu, Zhanyuan
    • Geomechanics and Engineering
    • /
    • v.24 no.5
    • /
    • pp.491-503
    • /
    • 2021
  • Shallow tunnels are vulnerable to earthquakes, and shallow ground is usually inhomogeneous. Based on the limit equilibrium method and the variational principle, a solution for the seismic collapse mechanism of a shallow tunnel in inhomogeneous ground is presented, and the finite difference method is employed for comparison with the analytical solution. The results show that the analytical solution is conservative when the horizontal and vertical stresses at the vault section equal the static earth pressure and zero, respectively. The safety factor of a shallow tunnel changes greatly during an earthquake; hence, cyclic loading characteristics should be considered when evaluating tunnel stability. The curved sliding surface agrees with the numerical simulation and with previous studies. To save time while ensuring accuracy, a curved sliding surface with two undetermined constants is a good choice for analyzing shallow tunnel stability. Parameter analysis demonstrates that the horizontal semiaxis, acceleration, ground cohesion and ground homogeneity strongly affect tunnel stability, while the horizontal semiaxis, vertical semiaxis, tunnel depth and ground homogeneity have an obvious influence on the sliding surface. The study concludes that the most effective ways to enhance tunnel stability are reducing the horizontal semiaxis, strengthening cohesion and placing the tunnel in competent ground.
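As an editorial illustration of the pseudo-static limit-equilibrium idea referenced in this abstract, the sketch below computes a factor of safety for a generic planar sliding block under a horizontal seismic coefficient. It is not the paper's curved-sliding-surface variational solution, and every input value is a hypothetical assumption.

```python
# Generic pseudo-static limit-equilibrium sketch (planar sliding block), meant only
# to illustrate how a horizontal seismic coefficient kh lowers the safety factor.
# NOT the paper's curved-sliding-surface solution; all inputs are hypothetical.
import math

def pseudo_static_fs(weight, slope_deg, cohesion, contact_length, friction_deg, kh):
    """Factor of safety for a rigid block on an inclined plane with seismic coefficient kh."""
    theta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    driving = weight * math.sin(theta) + kh * weight * math.cos(theta)      # forces along the plane
    normal = weight * math.cos(theta) - kh * weight * math.sin(theta)       # normal force on the plane
    resisting = cohesion * contact_length + normal * math.tan(phi)          # cohesion + friction
    return resisting / driving

for kh in (0.0, 0.1, 0.2, 0.3):
    fs = pseudo_static_fs(weight=500.0, slope_deg=35.0, cohesion=15.0,
                          contact_length=10.0, friction_deg=25.0, kh=kh)
    print(f"kh = {kh:.1f} -> FS = {fs:.2f}")
```

Running the sketch shows the safety factor dropping as the seismic coefficient grows, which is the qualitative behavior the abstract describes for a tunnel during shaking.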

Time uncertainty analysis method for level 2 human reliability analysis of severe accident management strategies

  • Suh, Young A;Kim, Jaewhan;Park, Soo Yong
    • Nuclear Engineering and Technology
    • /
    • v.53 no.2
    • /
    • pp.484-497
    • /
    • 2021
  • This paper proposes an extended time uncertainty analysis approach in Level 2 human reliability analysis (HRA) considering severe accident management (SAM) strategies. The method is a time-based model that classifies two time distribution functions, time required and time available, to calculate human failure probabilities from delayed action when implementing SAM strategies. The time required function can be obtained by the combination of four time factors: 1) time for diagnosis and decision by the technical support center (TSC) for a given strategy, 2) time for strategy implementation mainly by the local emergency response organization (ERO), 3) time to verify the effectiveness of the strategy and 4) time for portable equipment transport and installation. This function can vary depending on the given scenario and includes a summation of lognormal distributions and a choice regarding shifting the distribution. The time available function can be obtained via thermal-hydraulic code simulation (MAAP 5.03). The proposed approach was applied to assess SAM strategies that use portable equipment and safety depressurization system valves in a total loss of component cooling water event that could cause reactor vessel failure. The results from the proposed method are more realistic (i.e., not conservative) than other existing methods in evaluating SAM strategies involving the use of portable equipment.
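The time-based idea described above, comparing a "time required" distribution built from several lognormal components against a "time available" window, can be sketched with a small Monte Carlo calculation. The four time factors, their lognormal parameters, and the 120-minute available window below are illustrative assumptions, not values from the paper or from a MAAP run.

```python
# Minimal Monte Carlo sketch of a time-based human failure probability (HFP):
# HFP = P(time required > time available). All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Time required: sum of four lognormal components (minutes), illustrative parameters
t_diagnosis = rng.lognormal(mean=np.log(20), sigma=0.4, size=n)   # TSC diagnosis and decision
t_implement = rng.lognormal(mean=np.log(30), sigma=0.5, size=n)   # local ERO implementation
t_verify    = rng.lognormal(mean=np.log(10), sigma=0.3, size=n)   # verifying strategy effectiveness
t_transport = rng.lognormal(mean=np.log(25), sigma=0.6, size=n)   # portable equipment transport/installation
t_required = t_diagnosis + t_implement + t_verify + t_transport

# Time available: a fixed window (minutes) standing in for a thermal-hydraulic code result
t_available = 120.0

hep = np.mean(t_required > t_available)   # probability of delayed action
print(f"Estimated human failure probability = {hep:.4f}")
```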

Generalized Distributed Multiple Turbo Coded Cooperative Differential Spatial Modulation

  • Jiangli Zeng;Sanya Liu;Hui Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.999-1021
    • /
    • 2023
  • Differential spatial modulation uses the antenna index to transmit information, which improves spectral efficiency and completely bypasses any channel side information in the recommended setting. A generalized distributed multiple turbo coded-cooperative differential spatial modulation scheme based on distributed multiple turbo codes is put forward, and its performance in Rayleigh fading channels is analyzed. The scheme is a coded-cooperation communication scheme for which a new joint parallel iterative decoding method is proposed. Moreover, the code-matched interleaver, a key factor in turbo code design, is considered the best choice for the generalized multiple turbo coded-cooperative differential spatial modulation scheme. Monte Carlo simulation results show that, under the same conditions, the proposed cooperative differential spatial modulation scheme outperforms the corresponding non-cooperative scheme over Rayleigh fading channels in a multiple-input multiple-output communication system. In addition, the simulation results show that the code-matched interleaver achieves a better diversity gain than the random interleaver.

Exploring modern machine learning methods to improve causal-effect estimation

  • Kim, Yeji;Choi, Taehwa;Choi, Sangbum
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.2
    • /
    • pp.177-191
    • /
    • 2022
  • This paper addresses the use of machine learning methods for the causal estimation of treatment effects from observational data. Even though randomized experimental trials are the gold standard for revealing potential causal relationships, observational studies are another rich source for investigating exposure effects, for example in research on the comparative effectiveness and safety of treatments, where the causal effect can be identified if the covariates contain all confounding variables. In this context, statistical regression models for the expected outcome and for the probability of treatment are often imposed; these can be combined in a clever way to yield more efficient and robust causal estimators. Recently, targeted maximum likelihood estimation and causal random forests have been proposed and extensively studied for the use of data-adaptive regression in estimating causal parameters. Machine learning methods are a natural choice in these settings to improve the quality of the final estimate of the treatment effect. We explore how the design and training of several machine learning algorithms can be adapted for causal inference and study their finite-sample performance through simulation experiments under various scenarios. Application to percutaneous coronary intervention (PCI) data shows that these adaptations can improve on simple linear regression-based methods.
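The "clever combination" of an outcome-regression model and a treatment-probability model mentioned above is commonly realized as a doubly robust (AIPW-style) estimator. The sketch below illustrates that construction on synthetic data with random-forest nuisance models; it is not the paper's estimator or its PCI analysis, and it omits refinements such as cross-fitting.

```python
# Doubly robust (AIPW) sketch on synthetic data with machine-learned nuisance models.
# Illustrative only; true average treatment effect is 2 by construction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
p = 1 / (1 + np.exp(-X[:, 0]))               # true propensity score
A = rng.binomial(1, p)                        # treatment indicator
Y = 2.0 * A + X[:, 0] + rng.normal(size=n)    # outcome; true ATE = 2

# Nuisance models: propensity and conditional outcome regressions
ps = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, A).predict_proba(X)[:, 1]
mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[A == 1], Y[A == 1]).predict(X)
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[A == 0], Y[A == 0]).predict(X)

ps = np.clip(ps, 0.01, 0.99)                  # guard against extreme weights
aipw = np.mean(mu1 - mu0
               + A * (Y - mu1) / ps
               - (1 - A) * (Y - mu0) / (1 - ps))
print(f"AIPW estimate of the average treatment effect: {aipw:.3f}")
```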

Bayesian Conway-Maxwell-Poisson (CMP) regression for longitudinal count data

  • Morshed Alam;Yeongjin Gwon;Jane Meza
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.3
    • /
    • pp.291-309
    • /
    • 2023
  • Longitudinal count data are widely collected in biomedical research, public health, and clinical trials. These repeated measurements over time on the same subjects require an appropriate model of the dependency. The Poisson regression model is the first choice for modeling the expected count of interest; however, it may not be appropriate when the data exhibit over-dispersion or under-dispersion. Recently, the Conway-Maxwell-Poisson (CMP) distribution has become popular because it offers the flexibility to capture a wide range of dispersion in the data. In this article, we propose a Bayesian CMP regression model to accommodate over- and under-dispersion in modeling longitudinal count data. Specifically, we develop a regression model with a random intercept and slope to capture subject heterogeneity and to allow covariate effects to differ across subjects. We implement Bayesian computation via a Hamiltonian MCMC (HMCMC) algorithm for posterior sampling and then compute Bayesian model assessment measures for model comparison. Simulation studies are conducted to assess the accuracy and effectiveness of our methodology. The usefulness of the proposed methodology is demonstrated with the well-known epilepsy data example.
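For readers unfamiliar with the CMP distribution mentioned above, the sketch below evaluates its probability mass function, P(Y = y) proportional to lambda^y / (y!)^nu, and shows how the dispersion parameter nu moves the variance above or below the Poisson case (nu = 1). It is only an illustration of the distribution, not the paper's Bayesian HMCMC regression; the parameter values are arbitrary.

```python
# Conway-Maxwell-Poisson pmf sketch: nu = 1 recovers Poisson,
# nu < 1 gives over-dispersion, nu > 1 gives under-dispersion.
import math

def cmp_pmf(y: int, lam: float, nu: float, truncation: int = 200) -> float:
    """P(Y = y) = lam**y / (y!)**nu / Z(lam, nu); Z computed by truncated summation."""
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(truncation)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))   # log-sum-exp for stability
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)

for nu in (0.5, 1.0, 2.0):
    probs = [cmp_pmf(y, lam=3.0, nu=nu) for y in range(100)]
    mean = sum(y * p for y, p in enumerate(probs))
    var = sum((y - mean) ** 2 * p for y, p in enumerate(probs))
    print(f"nu = {nu:.1f}: mean = {mean:.2f}, variance = {var:.2f}")
```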

Choice of Efficient Sampling Rate for GNSS Signal Generation Simulators

  • Jinseon Son;Young-Jin Song;Subin Lee;Jong-Hoon Won
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.12 no.3
    • /
    • pp.237-244
    • /
    • 2023
  • A signal generation simulator is an economical and useful solution for Global Navigation Satellite System (GNSS) receiver design and testing. A software-defined radio approach is widely used in both receivers and simulators, and its flexible structure for adapting to new signals is ideally suited to testing a receiver and its signal processing algorithms during the signal design phase of a new satellite-based navigation system, before satellites are deployed in space. The generation of highly accurate delayed sampled codes is essential for generating signals in the simulator, and the sampling rate should be chosen to satisfy constraints such as the Nyquist criterion and the integer and non-commensurate properties so as not to distort the original signals. A high sampling rate increases the code delay accuracy but decreases the computational efficiency, and vice versa. Therefore, the selected sampling rate should be as low as possible while maintaining a certain level of code delay accuracy. This paper presents the lower limits of the sampling rate for GNSS signal generation simulators. In the simulation, two distinct code generation methods, which differ in the sampling position, are evaluated in terms of accuracy versus computational efficiency to show the lower limit of the sampling rate for several GNSS signals.
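The accuracy-versus-efficiency trade-off described above can be illustrated with a back-of-the-envelope calculation: a sampled code can only represent a delay to within half a sampling interval, so the worst-case code-delay (and hence pseudorange) error shrinks as the sampling rate grows. The candidate sampling rates below are hypothetical, and the check for an integer ratio with the code rate is a simplified stand-in for the paper's commensurability constraint.

```python
# Worst-case code-delay quantization error versus sampling rate for a C/A-type code.
# Candidate sampling rates are hypothetical, not the paper's recommended lower limits.
C = 299_792_458.0          # speed of light, m/s
CODE_RATE = 1.023e6        # GPS C/A-code chipping rate, chips/s

for fs in (2.046e6, 4.092e6, 5e6, 10e6, 20e6, 50e6):     # candidate sampling rates, Hz
    ts = 1.0 / fs                                          # sampling interval, s
    max_delay_err_s = ts / 2.0                             # worst-case delay quantization
    max_delay_err_chips = max_delay_err_s * CODE_RATE
    max_range_err_m = max_delay_err_s * C
    integer_ratio = (fs / CODE_RATE).is_integer()          # integer ratios repeat the same code phases
    print(f"fs = {fs/1e6:6.3f} MHz | max delay error = {max_delay_err_chips:.4f} chips "
          f"({max_range_err_m:6.1f} m) | integer multiple of code rate: {integer_ratio}")
```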

A Research Program for Modeling Strategic Aspects of International Container Port Competition

  • Anderson, Christopher M.;Luo, Meifeng;Chang, Young-Tae;Lee, Tae-Woo;Grigalunas, Thomas A.
    • Proceedings of the Korea Port Economic Association Conference
    • /
    • 2006.08a
    • /
    • pp.1-12
    • /
    • 2006
  • As national economies globalize, demand for intercontinental container shipping services is growing rapidly, providing a potential economic boon for the countries and communities that provide port services. On the promise of profits, many governments are investing heavily in port infrastructure, leading to a possible glut in port capacity, driving down prices for port services and eliminating profits as ports compete for business. Further, existing ports are making strategic investments to protect their market share, increasing the chance new ports will be overcapitalized and unprofitable. Governments and port researchers need a tool for understanding how local competition in their region will affect demand for port services at their location, and thus better assess the profitability of a prospective port. We propose to develop such a tool by extending our existing simulation model of global container traffic to incorporate demand-side shipper preferences and supply-side strategic responses by incumbent ports to changes in the global port network, including building new ports, scaling up existing ports, and unexpected port closures. We will estimate shipper preferences over routes, port attributes and port services based on US and international shipping data, and redesign the simulation model to maximize the shipper's revealed preference functions rather than simply minimize costs. As demand shifts, competing ports will adjust their pricing (short term) and infrastructure (long term) to remain competitive or defend market share, a reaction we will capture with a game theoretic model of local monopoly that will predict changes in port characteristics. The model's hypotheses will be tested in a controlled laboratory experiment tailored to local port competition in Asia, which will also serve to demonstrate the subtle game theoretic concepts of imperfect competition to a policy and industry audience. We will apply the simulation model to analyze changes in global container traffic in three scenarios: addition of a new large port in the US, extended closure of an existing large port in the US, and cooperative and competitive port infrastructure development among Korean partner countries in Asia.


Dynamics of Technology Adoption in Markets Exhibiting Network Effects

  • Hur, Won-Chang
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.127-140
    • /
    • 2010
  • The benefit that a consumer derives from the use of a good often depends on the number of other consumers purchasing the same good or other compatible items. This property, known as network externality, is significant in many IT-related industries. Over the past few decades, network externalities have been recognized in the context of physical networks such as the telephone and railroad industries. Today, as many products are provided as systems consisting of compatible components, an appreciation of network externality is becoming increasingly important. Network externalities have been extensively studied by economists seeking to explain new phenomena resulting from rapid advancements in ICT (Information and Communication Technology). As a result of these efforts, a new body of theories for the 'New Economy' has been proposed. The bottom-line theoretical argument of such theories is that technologies subject to network effects exhibit multiple equilibria and will finally lock into a monopoly, with one standard cornering the entire market. They emphasize that such "tippiness" is a typical characteristic of networked markets: multiple incompatible technologies rarely coexist, and the switch to a single, leading standard occurs suddenly. Moreover, it is argued that this standardization process is path dependent and that the ultimate outcome is unpredictable. With incomplete information about other actors' preferences, there can be excess inertia: consumers only moderately favor the change and hence are themselves insufficiently motivated to start the bandwagon rolling, but would get on it once it did start to roll. This startup problem can prevent the adoption of any standard at all, even one preferred by everyone. Conversely, excess momentum is another possible outcome, for example if a sponsoring firm uses low prices during the early periods of diffusion. The aim of this paper is to analyze the dynamics of the adoption process in markets exhibiting network effects by focusing on two factors: switching and agent heterogeneity. Switching is an important factor to consider in analyzing the adoption process; an agent's switching invokes switching by other adopters, which creates a positive feedback process that can significantly complicate the adoption process. Agent heterogeneity also plays an important role in shaping the early development of the adoption process, which in turn has a significant impact on its later development. The effects of these two factors are analyzed by developing an agent-based simulation model. Agent-based modeling (ABM) is a computer-based simulation methodology that offers many advantages over traditional analytical approaches. The model is designed such that agents have diverse preferences regarding technology and are allowed to switch their previous choice. The simulation results show that the adoption processes in a market exhibiting network effects are significantly affected by the distribution of agents and the occurrence of switching. In particular, both weak heterogeneity and strong network effects cause agents to start switching early, which expedites the emergence of 'lock-in.' When network effects are strong, agents are easily affected by changes in early market shares; this causes agents to switch earlier and in turn speeds up the market's tipping. The same effect is found for highly homogeneous agents. When agents are highly homogeneous, the market starts to tip toward one technology rapidly, and its choice is not always consistent with the population's initial inclination. Increased volatility and faster lock-in increase the possibility that the market will reach an unexpected outcome. The primary contribution of this study is the elucidation of the role of the parameters characterizing the market in the development of the lock-in process, and the identification of conditions under which such unexpected outcomes occur.
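A stripped-down version of the agent-based mechanism described above can be sketched in a few lines: agents pick between two technologies using an intrinsic preference plus a network-effect term proportional to current market share, and may switch each period. The parameter values for heterogeneity and network strength are illustrative assumptions, not the paper's calibration.

```python
# Toy agent-based model of technology adoption with network effects and switching.
# Parameter values are illustrative only.
import numpy as np

def simulate(n_agents=1000, n_steps=50, heterogeneity=0.5, network_strength=1.0, seed=1):
    rng = np.random.default_rng(seed)
    # Intrinsic preference for technology A over B; smaller spread = more homogeneous agents
    preference = rng.normal(loc=0.0, scale=heterogeneity, size=n_agents)
    choice = rng.integers(0, 2, size=n_agents)          # 1 = technology A, 0 = technology B
    shares = []
    for _ in range(n_steps):
        share_a = choice.mean()
        # Utility difference (A minus B): preference plus network effect of current shares
        utility_diff = preference + network_strength * (2 * share_a - 1)
        choice = (utility_diff > 0).astype(int)          # agents switch when the other side is better
        shares.append(share_a)
    return shares

for het, net in [(0.5, 1.0), (2.0, 1.0), (0.5, 0.2)]:
    final = simulate(heterogeneity=het, network_strength=net)[-1]
    print(f"heterogeneity = {het}, network strength = {net}: final share of A = {final:.2f}")
```

With weak heterogeneity or strong network effects the toy market tips to one technology quickly, which mirrors the lock-in behavior the abstract reports.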

Evaluating Choice Attributes of Korean Ginseng Chicken Soup as a Home Meal Replacement (HMR) Product Using Conjoint Analysis: A Case Study of Singapore Market (컨조인트 분석을 이용한 삼계탕 간편가정식의 선택속성 분석: 싱가포르 시장을 중심으로)

  • Kim, Eun-Mi;Ahn, Jee-Ahe;Lee, Ho-Jin;Lee, Min-A
    • Korean journal of food and cookery science
    • /
    • v.32 no.5
    • /
    • pp.609-618
    • /
    • 2016
  • Purpose: The purpose of this study was to analyze the attributes considered important by Singaporeans in the selection of Korean ginseng chicken soup as an HMR product using conjoint analysis techniques. Methods: A total of 400 questionnaires were distributed to local consumers in April 2012, of which 324 were completed (81.0%). Statistical analyses of data were performed using SPSS/Windows 18.0 for descriptive statistics and conjoint analysis. Results: Analysis of the attributes and levels of Korean ginseng chicken soup as an HMR product for people who lived in Singapore showed the relative importance of each attribute as follows: packing (32.4%), chicken (32.1%), glutinous rice (13.8%), soup (11.6%), and ginseng (10.0%). Results showed that Singaporean consumers preferred code J's Korean ginseng chicken soups as an HMR product, which consisted of half a chicken, glutinous rice, a whole ginseng root in a soy sauce-based soup, and a partially transparent package. The most preferred Korean ginseng chicken soup gained 50.4% potential market share from choice simulation when compared with the second preferred one. Conclusion: This study has significance in that such a practical research contributes to product development of a specific Korean dish for foreign consumers. In addition, the results of this study provide useful information for the food industry for global expansion and commercialization of Korean food, thereby providing an important foundation for future development of various Korean foods as HMR products.