Project name: RLCard
Author: Daochen Zha
Open source license: MIT
Project address: gitee.com/daochenzha/…
Project introduction
RLCard is a toolkit for Reinforcement Learning (RL) for card games. It supports a variety of card game environments and has an easy-to-use interface for implementing a variety of reinforcement learning and search algorithms. The goal of RLCard is to bridge the gap between reinforcement learning and incomplete information games. RLCard was developed by DATA Lab at Texas A&M University and community contributors.
Project showcase
In addition to Dou Dizhu, RLCard supports a variety of card game environments, including Blackjack, Texas Hold'em, UNO, Mahjong, and more.
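As a quick illustration, the sketch below creates a few of these environments by their registered IDs and prints their basic properties. The IDs shown ('blackjack', 'doudizhu', 'uno', 'mahjong') are assumptions based on RLCard's registry naming; the exact set available depends on the installed version.

import rlcard

# Environment IDs are assumed to follow RLCard's registry; availability may vary by version.
for env_id in ['blackjack', 'doudizhu', 'uno', 'mahjong']:
    env = rlcard.make(env_id)
    print(env_id, '->', env.num_players, 'player(s),', env.num_actions, 'actions')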
Example
Case walkthrough
Here is a small example:
import rlcard
from rlcard.agents import RandomAgent

# Create the Blackjack environment and attach a random agent
env = rlcard.make('blackjack')
env.set_agents([RandomAgent(num_actions=env.num_actions)])

print(env.num_actions) # 2
print(env.num_players) # 1
print(env.state_shape) # [[2]]
print(env.action_shape) # [None]

# Run one complete game; returns the collected transitions and the final payoffs
trajectories, payoffs = env.run()
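For reference, env.run() returns the per-player trajectories collected during the game and one payoff per player; a quick way to inspect the outcome (the layout described here is an assumption based on the example above):

# payoffs holds the final reward of each player (here a single Blackjack player)
print(payoffs)
# trajectories[0] holds the sequence of states and actions seen by player 0
print(len(trajectories[0]))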
RLCard connects flexibly with various algorithms, as the following examples show:
- Testing random agents
- Deep-Q Learning on Blackjack (see the training sketch after this list)
- Training CFR (chance sampling) on Leduc Hold'em
- Playing with pre-trained Leduc models
- Training DMC on Dou Dizhu
- Evaluating agents
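As a rough illustration of the Deep-Q Learning workflow mentioned above, the sketch below follows the pattern of examples/run_rl.py: run episodes, reorganize the trajectories into transitions, and feed them to a DQNAgent. It is only a sketch; the exact DQNAgent constructor arguments (such as mlp_layers), the reorganize and tournament helpers, and the episode count used here are assumptions that may differ between RLCard versions, and the DQN agent requires PyTorch.

import rlcard
from rlcard.agents import DQNAgent  # requires PyTorch
from rlcard.utils import reorganize, tournament

# Blackjack is a single-player environment, so one DQN agent is enough.
env = rlcard.make('blackjack')
agent = DQNAgent(num_actions=env.num_actions,
                 state_shape=env.state_shape[0],
                 mlp_layers=[64, 64])          # layer sizes are an arbitrary choice
env.set_agents([agent])

for episode in range(1000):                    # episode count is arbitrary
    # Collect one game of data
    trajectories, payoffs = env.run(is_training=True)
    # Reorganize raw trajectories into (state, action, reward, next_state, done) transitions
    trajectories = reorganize(trajectories, payoffs)
    for ts in trajectories[0]:
        agent.feed(ts)                         # store the transition and train periodically

# Evaluate by averaging payoffs over a number of games
print(tournament(env, 100))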
Supported algorithms
| Algorithm | Example | Reference |
|---|---|---|
| Deep Monte-Carlo (DMC) | examples/run_dmc.py | [d] |
| Deep-Q Learning (DQN) | examples/run_rl.py | [d] |
| Neural Fictitious Self-Play (NFSP) | examples/run_rl.py | [d] |
| Counterfactual Regret Minimization (CFR) | examples/run_cfr.py | [d] |
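For the CFR entry above, a minimal sketch in the spirit of examples/run_cfr.py is shown below. It assumes the CFRAgent class, its model_path argument, and the allow_step_back environment option behave as in recent RLCard releases; the iteration count is arbitrary and the local directory is hypothetical.

import rlcard
from rlcard.agents import CFRAgent

# CFR needs to traverse the game tree, so the environment must allow stepping back.
env = rlcard.make('leduc-holdem', config={'allow_step_back': True})

# The agent stores its policy under model_path (a hypothetical local directory).
agent = CFRAgent(env, model_path='./cfr_model')

for iteration in range(100):        # iteration count is arbitrary
    agent.train()                   # one iteration of CFR (chance sampling)

agent.save()                        # persist the learned average policy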
If you want to give the toolkit a try, visit the project home page: gitee.com/daochenzha/…