TY - JOUR

T1 - An optimal transmission strategy in zero-sum matrix games under intelligent jamming attacks

AU - Arunthavanathan, Senthuran

AU - Goratti, Leonardo

AU - Maggi, Lorenzo

AU - de Pellegrini, Francesco

AU - Kandeepan, Sithamparanathan

AU - Reisenfeld, Sam

PY - 2019/5

Y1 - 2019/5

N2 - Cognitive radio networks are more susceptible to jamming attacks because unlicensed users access the spectrum through dynamic spectrum access. In such a context, a natural concern for operators is the resilience of the system. We model this adversarial scenario as a system consisting of a single legitimate user (LU) pair and a malicious user (MU). The aim of the LU is to maximize its transmission throughput, while the aim of the MU is to minimize the throughput of the LU. We present the achievable transmission rate of the LU pair under jamming attacks, taking into account mainly the transmission power per channel. Furthermore, we embed our utility function in a zero-sum matrix game and extend it by employing fictitious play, in which both players learn each other's strategy over time; the resulting equilibrium becomes the system's global operating point. We further extend this to a reinforcement learning (RL) approach, in which the LU is given the advantage of incorporating RL methods to maximize its throughput against fixed jamming strategies.

AB - Cognitive radio networks are more susceptible to jamming attacks because unlicensed users access the spectrum through dynamic spectrum access. In such a context, a natural concern for operators is the resilience of the system. We model this adversarial scenario as a system consisting of a single legitimate user (LU) pair and a malicious user (MU). The aim of the LU is to maximize its transmission throughput, while the aim of the MU is to minimize the throughput of the LU. We present the achievable transmission rate of the LU pair under jamming attacks, taking into account mainly the transmission power per channel. Furthermore, we embed our utility function in a zero-sum matrix game and extend it by employing fictitious play, in which both players learn each other's strategy over time; the resulting equilibrium becomes the system's global operating point. We further extend this to a reinforcement learning (RL) approach, in which the LU is given the advantage of incorporating RL methods to maximize its throughput against fixed jamming strategies.

KW - Anti-jamming game

KW - Zero-sum games

KW - Reinforcement learning

KW - Fictitious play

UR - http://www.scopus.com/inward/record.url?scp=85037333517&partnerID=8YFLogxK

U2 - 10.1007/s11276-017-1629-4

DO - 10.1007/s11276-017-1629-4

M3 - Article

SN - 1022-0038

VL - 25

SP - 1777

EP - 1789

JO - Wireless Networks

JF - Wireless Networks

IS - 4

ER -