• Title/Summary/Keyword: game strategy

510 search results

A Study on Prediction of Baseball Game Based on Linear Regression

  • LEE, Kwang-Keun;HWANG, Seung-Ho
    • Korean Journal of Artificial Intelligence
    • /
    • v.7 no.2
    • /
    • pp.13-17
    • /
    • 2019
  • The sports market continues to grow every year, and among professional leagues, professional baseball draws the largest gate revenue. In sports, strategies vary with the situation, and data analysis is used to decide which direction to take. Human analysis can miss details, and subjective judgment can lead to false conclusions. If this analysis is carried out with artificial intelligence, objective analysis becomes possible and strategies can be made more rational, which helps win games. In this study, artificial intelligence is applied to baseball, the most popular such sport, to analyze players' strengths and weaknesses and then establish strategies efficiently. The data used in the experiment were provided on the KBO official website, and linear regression was applied as the forecasting algorithm. The results showed an accuracy of 87% with a standard error of ±5. Although the experiment did not have enough data, baseball strategies could be used effectively and game results predicted if a larger amount of regular data is applied in the future.
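As a rough illustration of the approach described in this abstract (the actual KBO dataset and feature set are not reproduced here; the features, weights, and noise level below are invented for the sketch), ordinary least squares can map simple team statistics to a win rate:

```python
# Minimal sketch: fit a linear regression from synthetic team statistics
# to a win rate. The features and ground-truth weights are assumptions,
# not the paper's actual KBO data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features per team-season: [batting average, ERA, on-base %].
X = rng.uniform([0.24, 3.0, 0.30], [0.30, 6.0, 0.38], size=(50, 3))
true_w = np.array([2.0, -0.08, 1.5])                # assumed true weights
y = X @ true_w - 0.2 + rng.normal(0, 0.01, 50)      # win rate near 0.5, noisy

# Ordinary least squares via lstsq, with an intercept column appended.
A = np.hstack([X, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
residual_std = np.sqrt(np.mean((y - pred) ** 2))
print(coef, residual_std)
```

In the paper's setting the coefficients would be fitted on the KBO statistics rather than synthetic draws, with accuracy and standard error reported on held-out games.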

An Oligopoly Spectrum Pricing with Behavior of Primary Users for Cognitive Radio Networks

  • Lee, Suchul;Lim, Sangsoon;Lee, Jun-Rak
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.4
    • /
    • pp.1192-1207
    • /
    • 2014
  • Dynamic spectrum sharing is a key technology for improving spectrum utilization in wireless networks. Elastic spectrum management provides a new opportunity for licensed primary users and unlicensed secondary users to efficiently utilize the scarce wireless resource. In this paper, we present a game-theoretic framework for dynamic spectrum allocation in which the primary users rent unutilized spectrum to the secondary users for monetary profit. In reality, due to the ON-OFF behavior of the primary user, the quantity of spectrum that can be opportunistically shared by the secondary users is limited. We model this situation with renewal theory and formulate the spectrum pricing scheme as a Bertrand game, taking the scarcity of the spectrum into account. Under the Nash-equilibrium pricing scheme, each player in the game converges to a strategy that maximizes its own profit. We also investigate the impact of several properties, including channel quality and spectrum substitutability. Based on the equilibrium analysis, we finally propose a decentralized algorithm, called DST, that leads the primary users to the Nash equilibrium. The stability of the proposed algorithm, in terms of convergence to the Nash equilibrium, is also studied.
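The pricing dynamics in this abstract can be illustrated with a toy Bertrand duopoly (the linear demand function and all numbers below are illustrative assumptions, not the paper's model):

```python
# Two primary users price their spectrum; secondary-user demand for
# channel i is q_i = a - b*p_i + c*p_j, where c models substitutability.
# Iterated best responses converge to the Nash-equilibrium price,
# mirroring the decentralized convergence studied in the paper.
a, b, c = 10.0, 2.0, 1.0   # illustrative demand parameters

def best_response(p_other):
    # Maximize profit p_i * (a - b*p_i + c*p_other) over p_i.
    return (a + c * p_other) / (2 * b)

p1, p2 = 1.0, 5.0          # arbitrary starting prices
for _ in range(100):
    p1, p2 = best_response(p2), best_response(p1)

print(p1, p2)  # both approach the symmetric equilibrium a / (2b - c)
```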

Opportunistic Spectrum Access with Discrete Feedback in Unknown and Dynamic Environment: A Multi-agent Learning Approach

  • Gao, Zhan;Chen, Junhong;Xu, Yuhua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.3867-3886
    • /
    • 2015
  • This article investigates the problem of opportunistic spectrum access in a dynamic environment in which the signal-to-noise ratio (SNR) is time-varying. Unlike existing work on continuous feedback, we consider the more practical scenario in which the transmitter receives an Acknowledgment (ACK) if the received SNR is larger than the required threshold and a Non-Acknowledgment (NACK) otherwise; that is, the feedback is discrete. Several applications with different threshold values are also considered in this work. The channel selection problem is formulated as a non-cooperative game and subsequently proved to be a potential game, which has at least one pure-strategy Nash equilibrium. Following this, a multi-agent Q-learning algorithm is proposed that converges to Nash equilibria of the game. Furthermore, opportunistic spectrum access with multiple discrete feedback levels is also investigated. Finally, simulation results verify that the proposed multi-agent Q-learning algorithm is applicable both to binary feedback and to multiple discrete feedback levels.
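A minimal sketch of the binary ACK/NACK learning loop described in this abstract (the number of users, channels, and success probabilities are invented for illustration; the paper's algorithm and SNR model are more elaborate):

```python
import random

random.seed(1)

# Toy channel-selection game: 2 secondary users, 3 channels. A transmission
# returns ACK (reward 1) only if the channel's SNR clears the threshold --
# modeled here as a per-channel success probability -- and no other user
# collides on the same channel. Otherwise the feedback is NACK (reward 0).
ack_prob = [0.9, 0.8, 0.3]
n_users, n_channels = 2, 3
Q = [[0.0] * n_channels for _ in range(n_users)]
alpha, eps = 0.1, 0.1

for t in range(20000):
    # Epsilon-greedy channel selection per agent.
    choices = [random.randrange(n_channels) if random.random() < eps
               else max(range(n_channels), key=lambda c: Q[u][c])
               for u in range(n_users)]
    for u, ch in enumerate(choices):
        collided = choices.count(ch) > 1
        reward = 1.0 if (not collided and random.random() < ack_prob[ch]) else 0.0
        Q[u][ch] += alpha * (reward - Q[u][ch])  # learn from discrete feedback

# At a pure-strategy Nash equilibrium the users settle on distinct good channels.
greedy = [max(range(n_channels), key=lambda c: Q[u][c]) for u in range(n_users)]
print(greedy)
```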

Honeypot game-theoretical model for defending against APT attacks with limited resources in cyber-physical systems

  • Tian, Wen;Ji, Xiao-Peng;Liu, Weiwei;Zhai, Jiangtao;Liu, Guangjie;Dai, Yuewei;Huang, Shuhua
    • ETRI Journal
    • /
    • v.41 no.5
    • /
    • pp.585-598
    • /
    • 2019
  • A cyber-physical system (CPS) is a new kind of mechanism, controlled or monitored by computer algorithms, that intertwines physical and software components. Advanced persistent threats (APTs) represent stealthy, powerful, and well-funded attacks against CPSs; because such attacks target integrated physical processes, defending against them has recently become an active research area. Existing offensive and defensive processes for APTs in CPSs are usually modeled with incomplete-information game theory. However, honeypots, which are effective mechanisms for defending security vulnerabilities, have not been widely adopted or modeled for defense against APT attacks in CPSs. In this study, a honeypot game-theoretical model considering both low- and high-interaction modes is used to investigate the offensive and defensive interactions, so that defensive strategies against APTs can be optimized. In this model, human-analysis and honeypot-allocation costs are introduced as limited resources. We prove the existence of Bayesian Nash equilibrium strategies and obtain the optimal defensive strategy under limited resources. Finally, numerical simulations demonstrate that the proposed method is effective in obtaining the optimal defensive effect.

FTAs for Global Free Trade: Through Trade Liberalization Game

  • Nahm, Sihoon
    • Journal of Korea Trade
    • /
    • v.26 no.1
    • /
    • pp.33-56
    • /
    • 2022
  • Purpose - This paper explains how free trade agreements (FTAs) work as a building block toward global free trade and why they perform better than other trade regimes. Design/methodology - This paper uses a trade liberalization game setup. Three countries choose a trade agreement strategy under a given trade regime, and an agreement is made only when all member countries agree. The paper evaluates each trade regime involving FTAs and customs unions (CUs) by the area of the global-free-trade equilibrium region over the technology or demand gap between countries. Findings - FTAs make global free trade easier to reach. In this game, there are two main reasons for failure to reach global free trade. First, when a technology gap exists, a trade regime with FTAs makes it more difficult for a non-member to refuse trade agreements than a regime without FTAs. Second, when a demand gap exists, a trade regime with FTAs makes it harder to exclude non-members than a regime with only CUs. Therefore, a trade regime with FTAs can work better in reaching global free trade. Originality/value - The concept of "implicit coordination" is used, which assumes that FTA members keep external tariffs for non-members the same as before the FTA. Without this assumption, FTA members lower their tariffs to non-members, making it easier for non-members to refuse free trade; an FTA can prevent this only with implicit coordination. This makes a trade regime with FTAs more effective in reaching global free trade.

Theoretical Background of Games for Social Change (사회 변화를 촉구하는 기능성 게임의 이론적 배경)

  • Chu, Jean Ho
    • Journal of Korea Game Society
    • /
    • v.22 no.1
    • /
    • pp.55-64
    • /
    • 2022
  • Serious games are developed for particular purposes. This study provides a theoretical background on serious games for social change as a basis for future experiments and practice. By examining the elements of games and related theories, narrative immersion and critical participation are identified as the two main experiences of games, both realized through role play. Based on educational theories of behavioral and cognitive change, this study suggests using role play as a strategy in serious games for social change. Analyzing cases of serious games in Korea, this study concludes that participants are able to experience the context of a situation through role play.

'Animal Ground', Familiar UX and a New Game (친숙한 UX, 새로운 게임 '애니멀 그라운드')

  • Ahn, You Jung;Kim, Ji Sim;Kim, Kyong Ah;Jang, Jae Hun;Park, Chi Su;Son, Hui Su;Ko, Yun Su
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.01a
    • /
    • pp.259-260
    • /
    • 2020
  • As the mobile game market grows, a wide variety of mobile games, such as RPGs and board games, are being released. Among board games, most have excessive pay-to-win elements and outcomes decided too heavily by luck, leaving many users dissatisfied. In this paper, we break away from the existing board game format, provide a new one, and develop a mobile game application in which victory is decided by strategy, with luck and payment elements minimized. In addition, by combining racing elements with the board game, the game runs in real time rather than in turns, heightening the tension and offering a fresh experience and fun.

The Optimum Strategy for Favorable Situation in Discrete Red & Black (이산형 적흑게임에서 유리한 경우의 최적전략)

  • 석영우;안철환
    • Journal of the military operations research society of Korea
    • /
    • v.30 no.1
    • /
    • pp.70-80
    • /
    • 2004
  • In discrete red and black, you can stake any amount $s$ in your possession, where $s$ takes positive integer values. Suppose your goal is $N$ and your current fortune is $f$, with $0 < f < N$. You win back your stake and as much more with probability $p$, and lose your stake with probability $q = 1 - p$. In this study, we consider optimum strategies for this game when $p > \frac{1}{2}$, where the player has the advantage over the house. The optimum strategy at any $f$ when $p > \frac{1}{2}$ is to play timidly, that is, to bet 1 all the time; this is called the Timid1 strategy. In this paper, we perform a simulation study to show that the Timid1 strategy is optimum in discrete red and black when $p > \frac{1}{2}$.
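The Timid1 strategy is easy to check by simulation, in the spirit of the paper's simulation study; this sketch (with illustrative parameters) compares a Monte-Carlo estimate against the standard gambler's-ruin formula:

```python
import random

random.seed(0)

# Timid1 strategy: when p > 1/2, bet 1 unit every round until the fortune
# hits 0 (ruin) or reaches the goal N.
def timid1_success_rate(p, f, N, trials=20000):
    wins = 0
    for _ in range(trials):
        fortune = f
        while 0 < fortune < N:
            fortune += 1 if random.random() < p else -1
        wins += fortune == N
    return wins / trials

# Gambler's-ruin theory: P(reach N) = (1 - (q/p)^f) / (1 - (q/p)^N).
p, f, N = 0.6, 5, 10
q = 1 - p
theory = (1 - (q / p) ** f) / (1 - (q / p) ** N)
rate = timid1_success_rate(p, f, N)
print(rate, theory)
```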

Optimum Strategies When p<1/2 in Discrete Red & Black (이산형 적흑게임에서 p<1/2인 경우의 최적전략)

  • Seok, Young-Woo
    • Journal of the military operations research society of Korea
    • /
    • v.31 no.1
    • /
    • pp.122-129
    • /
    • 2005
  • In discrete red and black, you can stake any amount $s$ in your possession, where $s$ takes positive integer values. Suppose your goal is $N$ and your current fortune is $f$, with $0 < f < N$. You win back your stake and as much more with probability $p$, and lose your stake with probability $q = 1 - p$. In this study, we consider optimum strategies for this game when $p < \frac{1}{2}$, where the house has the advantage over the player. It is shown that the optimum strategy at any $f$ is the DBold strategy, which is to play boldly in discrete red and black when $p < \frac{1}{2}$. We then perform a simulation study to show that this strategy, which is to bet as much as you can, is optimal in the discrete case.
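A quick Monte-Carlo sketch (with illustrative parameters, not the paper's experiments) comparing bold play against timid unit bets when p < 1/2:

```python
import random

random.seed(0)

# DBold strategy: when p < 1/2, bet as much as possible each round,
# i.e. min(f, N - f), versus the timid strategy of betting 1 unit.
def success_rate(p, f, N, bet_fn, trials=20000):
    wins = 0
    for _ in range(trials):
        fortune = f
        while 0 < fortune < N:
            b = bet_fn(fortune, N)
            fortune += b if random.random() < p else -b
        wins += fortune >= N
    return wins / trials

p, f, N = 0.4, 3, 10
bold = success_rate(p, f, N, lambda x, n: min(x, n - x))
timid = success_rate(p, f, N, lambda x, n: 1)
print(bold, timid)  # bold play wins far more often when p < 1/2
```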

Beamforming Games with Quantized CSI in Two-user MISO ICs (두 유저 MISO 간섭 채널에서 불완전한 채널 정보에 기반한 빔포밍 게임)

  • Lee, Jung Hoon;Lee, Jin;Ryu, Jong Yeol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.7
    • /
    • pp.1299-1305
    • /
    • 2017
  • In this paper, we consider a beamforming game between the transmitters in a two-user multiple-input single-output interference channel with limited feedback and investigate how each transmitter can find a modified strategy from the quantized channel state information (CSI). In the beamforming game, each transmitter (i.e., a player) tries to maximize its achievable rate (i.e., a payoff function) via a proper beamforming strategy. In our case, each transmitter's beamforming strategy is represented by a linear combining factor between the maximum ratio transmission (MRT) and zero-forcing (ZF) beamforming vectors, which is the strategy that achieves Pareto optimality. With quantized CSI, the transmitters' strategies may not be valid because of quantization errors. We propose a modified solution that takes the effects of the quantization errors into account.
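The MRT/ZF combination described in this abstract can be sketched as follows (the channel vectors are random illustrative draws under perfect CSI; the paper's scheme operates on quantized CSI):

```python
# Pareto-optimal beamforming parametrization for a 2-user MISO interference
# channel: transmitter 1's beam is a normalized combination of its MRT vector
# (toward its own receiver) and its ZF vector (nulling interference at Rx2).
import numpy as np

rng = np.random.default_rng(0)
h11 = rng.normal(size=2) + 1j * rng.normal(size=2)  # channel Tx1 -> Rx1
h12 = rng.normal(size=2) + 1j * rng.normal(size=2)  # channel Tx1 -> Rx2

w_mrt = h11 / np.linalg.norm(h11)

# ZF: project h11 onto the orthogonal complement of h12, then normalize.
proj = h11 - (h12.conj() @ h11) / (np.linalg.norm(h12) ** 2) * h12
w_zf = proj / np.linalg.norm(proj)

def beam(lam):
    # lam in [0, 1]: lam = 1 gives MRT, lam = 0 gives ZF.
    w = lam * w_mrt + (1 - lam) * w_zf
    return w / np.linalg.norm(w)

w = beam(0.5)
signal = abs(h11.conj() @ w) ** 2   # desired power at Rx1
leak = abs(h12.conj() @ w) ** 2     # interference caused at Rx2
print(signal, leak)
```

Sweeping the combining factor trades off desired signal power against interference caused to the other user, which is exactly the strategy space of each player in the game.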