“Sampling” refers to generating random numbers from some probability distribution. This article introduces how to sample from non-standard probability distributions, and how to calculate the probability of those samples under the distribution.
Often, in computer science and engineering, we encounter problems that cannot be solved with closed-form equations. These problems usually involve complex systems, noisy inputs, or both. Here are just a few examples of real-world problems that have no exact analytic solution:
- You have built a computer model of a plane and want to determine how well the plane holds up under different weather conditions.
- You want to determine, based on a model of groundwater diffusion, whether chemical runoff from a proposed plant will affect the water supply of nearby residents.
- You have a robot that captures noisy images from its camera, and you want to recover the three-dimensional structure of the objects depicted in those images.
- You want to compute how likely you are to win at chess if you take a particular action.
Although these types of problems cannot be solved exactly, we can often obtain approximate solutions using techniques known as Monte Carlo methods. The key idea behind Monte Carlo methods is to take many samples, and then use those samples to estimate the solution.
The Monte Carlo method, also known as the statistical simulation method, uses repeated random sampling to simulate the probabilistic and statistical behavior of a system, and is widely used in physics, chemistry, economics, and information technology. Rejection sampling is one such random sampling method for complex problems.
The Monte Carlo concept
The Monte Carlo method is a method of approximate inference: by sampling a large number of particles, it can solve problems involving expectations, means, areas, integrals, and so on.
Monte Carlo was originally the name of a casino district in Monaco; the method probably takes its name from the fact that random simulation closely resembles the process of rolling dice in a casino. The earliest Monte Carlo methods were aimed at summation or integration problems that were difficult to solve analytically.
For example, the figure below illustrates the classic problem of estimating π with Monte Carlo. The computer randomly generates points in a unit square and counts how many fall inside the quarter circle inscribed in it. The fraction of points that land in the quarter circle approximates its area, π/4, so multiplying that fraction by 4 gives an estimate of π.
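To make the procedure concrete, here is a minimal sketch of this estimator in Python (the function name and structure are my own illustration, not part of the original figure):

```python
import numpy as np

def estimate_pi(n_samples, rso=np.random):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that land inside the quarter circle."""
    x = rso.rand(n_samples)
    y = rso.rand(n_samples)
    # A point (x, y) lies inside the quarter circle when x^2 + y^2 <= 1;
    # the fraction of such points approximates pi / 4.
    inside = (x ** 2 + y ** 2) <= 1
    return 4 * np.mean(inside)
```

With a million samples, `estimate_pi(1000000)` typically returns a value within a few thousandths of π.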
Accept-reject sampling
For distributions whose cumulative distribution function is unknown, we can use accept-reject sampling. As shown in the figure below, p(z) is the distribution we want to sample from, and q(z) is our proposal distribution; we choose a constant k such that kq(z) ≥ p(z) everywhere. We first draw particles from q(z) by direct sampling, then, for each particle, draw a height uniformly between 0 and kq(z) and check where it falls. Particles landing in the gray region (above p(z)) are rejected, while those landing under the red curve p(z) are accepted; eventually we obtain N particles distributed according to p(z).
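The following sketch shows this procedure in Python. It assumes the caller supplies the target density p, a proposal sampler q_sample with density q_pdf, and a constant k such that k·q(z) ≥ p(z) everywhere; all of these names are illustrative rather than taken from the text:

```python
import numpy as np

def rejection_sample(p, q_sample, q_pdf, k, n, rso=np.random):
    """Draw `n` samples from the target density `p` by accept-reject
    sampling, using proposal draws from `q` with k * q(z) >= p(z)."""
    samples = []
    while len(samples) < n:
        z = q_sample()                     # draw a particle from q(z)
        u = rso.uniform(0, k * q_pdf(z))   # uniform height under k*q(z)
        if u <= p(z):                      # below the red curve: accept
            samples.append(z)
    return np.array(samples)
```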
What is sampling?
The term sampling means generating random values from some probability distribution. For example, the value you get by rolling a six-sided die is a sample. The card you draw from the top of the deck after shuffling is a sample. The spot where the dart hits the board is also a sample. The only difference between these different samples is that they are generated from different probability distributions. In the case of dice, the distribution places equal weights over six values. In the case of cards, equal weights are placed across 52 values. In the case of a dart board, the distribution places the weight in a circular area (although it may not be evenly distributed, depending on your skill as a dart player).
There are two main ways we use samples. The first is simply to generate a random value for later use: for example, a random card draw in a computer game of poker. The second is to use samples for estimation. For example, if you suspect your friend is playing with loaded dice, you might want to roll the dice many times to see whether some numbers come up more often than you would expect. Or, you might just want to characterize the range of possibilities, as in the aircraft example above. Weather is a rather messy system, meaning it is impossible to compute exactly whether an aircraft will survive a given weather situation. Instead, you could simulate the aircraft's behavior under many different weather conditions, many times, which would give you an idea of the conditions under which the aircraft is most likely to fail.
Programming with samples and probabilities
As with most applications in computer science, you can make design decisions when programming with samples and probabilities that affect the overall cleanliness, consistency, and correctness of your code. In this chapter, we will use a simple example to show how to sample random items in a computer game. In particular, we will focus on probability-specific design decisions, including functions for sampling and evaluating probability, using logarithms, allowing reproducibility, and separating the process of generating the sample from the specific application.
A brief explanation of symbols
- We use the notation $p(x)$ to denote the value of a probability density function (PDF) or probability mass function (PMF) evaluated at $x$, where $x$ is a random variable.
- PDFs are continuous functions of $x \in \mathbb{R}$, while PMFs are discrete functions of $x \in \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers.
- The probability distribution over darts is a continuous PDF, while the probability distribution over dice is a discrete PMF. In both cases, $p(x) \ge 0$ for all $x$, and the distribution integrates (or sums) to 1.
- Given a value $x$, we might want to evaluate the probability density (or mass) at that location. In mathematical notation, we write this as $p(x)$.
- Given a PDF or PMF, we might also want to sample a value $x$ in a manner proportional to the distribution (so that we are more likely to draw samples in places where the probability is higher). In mathematical notation, we write this as $x \sim p$, indicating that $x$ is sampled proportional to $p$.
Sampling magic items (example)
Let's say we're writing a role-playing game (RPG) and we want a way to generate the bonus stats for the magic items that monsters randomly drop. We want the maximum bonus on an item to be +5, and we want higher bonuses to be less likely than lower ones. If $B$ is a random variable over the possible bonus values, then:

$$
p(B=0)=0.0,\quad p(B=1)=0.55,\quad p(B=2)=0.25,\quad p(B=3)=0.12,\quad p(B=4)=0.06,\quad p(B=5)=0.02
$$
We can also specify that each bonus point is allocated among six stats (dexterity, constitution, strength, intelligence, wisdom, and charisma). Thus, an item with a +5 bonus could spread those points across different stats (e.g., +2 to wisdom and +3 to intelligence) or concentrate them all in one stat (e.g., +5 to charisma).
How would we randomly sample from this distribution? The simplest approach is probably to sample the item's overall bonus first, and then sample how that bonus is distributed across the stats. Conveniently, both the probability distribution over bonuses and the distribution over allocations are instances of the multinomial distribution.
Multinomial distribution
Definition: the multinomial distribution is used when you have several possible outcomes and want to characterize the probability of each combination of outcomes occurring. The classic example used to explain the multinomial distribution is the ball and urn. The idea is that you have an urn containing balls of different colors (e.g., 30% red, 20% blue, and 50% green). You draw a ball, record its color, put it back in the urn, and repeat many times. An outcome corresponds to drawing a ball of a particular color, and the probability of each outcome corresponds to the proportion of balls of that color (e.g., for drawing a blue ball, the probability is $p = 0.2$). The multinomial distribution then describes the possible combinations of outcomes when several balls are drawn (e.g., two greens and one blue).
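NumPy's built-in multinomial sampler can simulate this urn directly (a quick illustration; the counts shown are just one possible outcome):

```python
>>> import numpy as np
>>> # draw 3 balls, with replacement, from an urn that is
>>> # 30% red, 20% blue, and 50% green
>>> np.random.multinomial(3, [0.3, 0.2, 0.5])
array([1, 0, 2])
```

Here the result would be read as one red, no blues, and two greens.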
Writing a MultinomialDistribution class
In general, there are two use cases for a distribution: we might want to sample from it, and we might want to evaluate the probability of one (or more) samples under its PMF or PDF. Although the actual computations needed for these two functions are quite different, they depend on a common piece of information: the distribution's parameters. In the case of the multinomial distribution, the parameters are the event probabilities $p$ (corresponding to the proportions of the different ball colors in the urn example above).
The simplest solution is to simply create two functions that take the same parameters but are otherwise independent. However, I usually choose to use a class to represent my distribution. This has several benefits:
- You only need to pass in the parameters once, when creating the class.
- There are other attributes we might want to know about a distribution: its mean, variance, derivative, and so on. Once we have even a handful of related functions, it is more convenient to have them operate on a common object than to pass the same parameters to many different functions.
- It is usually a good idea to check that the parameter values are valid (for example, in the case of the multinomial distribution, the vector $p$ of event probabilities should sum to 1). Doing this check once, in the constructor, is much more efficient than doing it every time one of the functions is called.
- Computing the PMF or PDF sometimes involves constants that depend only on the parameters. With a class, we can precompute these constants in the constructor, rather than computing them every time the PMF or PDF function is called.
In practice, this is how many statistics packages work, including SciPy's own distributions, which live in the scipy.stats module. While we do use other SciPy functions, however, we are not using SciPy's probability distributions here, partly for the sake of illustration and partly because SciPy does not currently include a multinomial distribution.
Here is the constructor code for the class:
```python
import numpy as np
from scipy.special import gammaln


class MultinomialDistribution(object):

    def __init__(self, p, rso=np.random):
        """Initialize the multinomial distribution.

        Parameters
        ----------
        p: numpy array of length `k`
            The outcome probabilities
        rso: numpy RandomState object (default: np.random)
            The random number generator

        """
        # Check that the outcome probabilities sum to 1; if they
        # don't, then something is wrong!
        if not np.isclose(np.sum(p), 1.0):
            raise ValueError("outcome probabilities do not sum to 1")

        # Store the parameters that were passed in
        self.p = p
        self.rso = rso

        # Precompute the log probabilities, for use by the log-PMF
        # (the function `np.log` operates elementwise over the array `p`)
        self.logp = np.log(self.p)
```
Before we move on to the rest of the class, let's look at two points related to the constructor.
Descriptive and mathematical variable names
In general, programmers are encouraged to use descriptive variable names: for example, it is considered better practice to use names like independent_variable and dependent_variable rather than x and y. A standard rule of thumb is to never use variable names that are only one or two characters long. However, you will notice that in the constructor of our MultinomialDistribution class, we use the variable name p, in violation of typical naming conventions.
While I agree that such naming conventions should apply in almost every domain, there is one exception: mathematics. The difficulty with coding up mathematical equations is that those equations usually have variable names that are just a single letter: $x$, $y$, $\alpha$, and so on. So, if you are translating them directly into code, the easiest variable names to use are x, y, and alpha. Obviously, these are not the most informative variable names (the name x does not convey much information), but having more descriptive names can also make it harder to switch back and forth between the code and the equations.
I believe that when writing code that directly implements an equation, you should use the same variable names as in the equation. This makes it easy to see which parts of the code are implementing which parts of the equation. Of course, this makes the code harder to understand in isolation, so it’s especially important that comments explain the goals of various calculations well. If the equation is listed in the academic paper, the annotation should refer to the equation number for easy lookup.
Importing NumPy
We imported the numpy module as np. This is standard practice in the world of numerical computing, because NumPy provides a huge number of useful functions, many of which may be used even within a single file.
```python
import numpy as np
```
Sampling from a multinomial distribution
For sampling from a multinomial, NumPy provides us with a function, np.random.multinomial, but there are still some design decisions we can make around it.
Seed the random number generator
Although we do want to take a random sample, sometimes we want our results to be repeatable: even if the numbers look random, if we run the program again, we might want it to use the same “random” sequence of numbers.
To allow for generating such "repeatable random" numbers, we need to tell our sampling function how to generate the random numbers. We can accomplish this with the NumPy RandomState object, which is essentially a random number generator object that can be passed around. It has most of the same functions as np.random; the difference is that we get to control where the random numbers come from. We create it as follows:
```python
>>> import numpy as np
>>> rso = np.random.RandomState(230489)
```
The number passed to the RandomState constructor is the seed of the random number generator. As long as we instantiate it with the same seed, a RandomState object will produce the same "random" numbers in the same order, thus ensuring replicability:
```python
>>> rso.rand()
0.5356709186237074
>>> rso.rand()
0.6190581888276206
>>> rso.rand()
0.23143573416770336

>>> rso.seed(230489)

>>> rso.rand()
0.5356709186237074
>>> rso.rand()
0.6190581888276206
```
Earlier, we saw that the constructor took an argument called rso. The rso variable is a RandomState object that has already been initialized. I like to make the RandomState object an optional parameter: it is convenient not to be forced to use it, but I do want the option of using it (which I wouldn't have if I only used the np.random module).
So, if the rso variable is not given, the constructor defaults to np.random, and sampling falls back on np.random.multinomial. Otherwise, it uses the multinomial sampler from the RandomState object itself.
What's a parameter?

A multinomial draw involves both $n$ (the total number of events) and $p$ (the outcome probabilities), so either could be considered a "parameter" of the distribution. Here we treat only $p$ as the distribution's parameter, fixed once in the constructor, and pass $n$ as an argument to sample: in our use cases, the outcome probabilities stay fixed while the number of events changes from draw to draw.
```python
def sample(self, n):
    """Samples draws of `n` events from a multinomial distribution with
    outcome probabilities `self.p`.

    Parameters
    ----------
    n: integer
        The number of total events

    Returns
    -------
    numpy array of length `k`
        The sampled number of occurrences for each outcome

    """
    x = self.rso.multinomial(n, self.p)
    return x
```
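For example, drawing n=10 events from a fair four-outcome distribution might look like this (the exact counts will vary, since they are random):

```python
>>> dist = MultinomialDistribution(np.array([0.25, 0.25, 0.25, 0.25]), rso=rso)
>>> dist.sample(10)
array([2, 4, 1, 3])
```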
Computing the multinomial PMF
Although we don’t need to explicitly calculate the probability of the magic item we generate, it’s almost always a good idea to write a function that calculates the probability mass function (PMF) or probability density function (PDF) of the distribution. Why is that?
- We can use it for testing: if we take many samples with our sampling function, their histogram should approximate the exact PDF or PMF. If, after many samples, the approximation is poor or clearly wrong, we know there is a bug somewhere in our code.
- You will often need it later anyway. For example, we might want to classify randomly generated items as common, uncommon, and rare, depending on how likely they are to be generated. To determine this, we need to be able to compute the PMF.
The multinomial PMF equation
Formally, the PMF of the multinomial distribution is:

$$
p(\mathbf{x}; \mathbf{p}) = \frac{n!}{x_1! \cdots x_k!} \, p_1^{x_1} \cdots p_k^{x_k}
$$

where $\mathbf{x} = (x_1, \ldots, x_k)$ is a vector of length $k$ specifying the number of times each event occurred, $\mathbf{p}$ is a vector specifying the probability of each event occurring, and $n = \sum_i x_i$ is the total number of events.

The factorials in the above equation can actually be expressed using a special function, $\Gamma$, called the gamma function, with $n! = \Gamma(n + 1)$. When we get to writing the code, it will be more convenient and efficient to use the gamma function rather than the factorial, so we rewrite the equation as:

$$
p(\mathbf{x}; \mathbf{p}) = \frac{\Gamma(n + 1)}{\Gamma(x_1 + 1) \cdots \Gamma(x_k + 1)} \, p_1^{x_1} \cdots p_k^{x_k}
$$
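As a quick sanity check of the identity $n! = \Gamma(n + 1)$ (a snippet of my own, not from the chapter):

```python
import numpy as np
from scipy.special import gammaln

# gammaln(n + 1) computes log(n!); for n = 5 we recover 5! = 120
assert np.isclose(np.exp(gammaln(5 + 1)), 120.0)
```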
Using log values
In a computer, numbers are stored with a finite number of bits. Overflow occurs when a computation produces a value larger than the largest number the machine can represent; underflow occurs when a value is closer to zero than the smallest representable positive number, in which case it is rounded to 0.
Before getting into the actual code needed to implement the equation above, I want to highlight one of the most important design decisions when writing code with probabilities: using log values. That is, rather than working directly with probabilities $p(x)$, we should work with log probabilities, $\log p(x)$. This is because probabilities can get very small very quickly, resulting in underflow errors.
To motivate this, recall that probabilities must be between 0 and 1 (inclusive). NumPy has a useful function, finfo, that will tell us the limits of floating-point values on our system. For example, on a 64-bit machine, we see that the smallest usable positive number (given by tiny) is:
```python
>>> import numpy as np
>>> np.finfo(float).tiny
2.2250738585072014e-308
```
While that may seem very small, it is not unusual to encounter probabilities of this magnitude, or even smaller. Moreover, multiplying probabilities together is a common operation, but if we try to do this with very small probabilities, we encounter underflow:
```python
>>> tiny = np.finfo(float).tiny
>>> # if we multiply numbers that are too small, we lose all precision
>>> tiny * tiny
0.0
```
However, taking logarithms alleviates this problem, because we can represent a much wider range of numbers in log space than we normally can. Formally, log values range from $-\infty$ to zero. In practice, they range from the min value returned by finfo (the smallest number that can be represented) up to zero. The min value is far smaller than the log of tiny (which would be our lower bound if we did not work in log space):
```python
>>> # this is our lower bound normally
>>> np.log(tiny)
-708.39641853226408
>>> # this is our lower bound when using logs
>>> np.finfo(float).min
-1.7976931348623157e+308
```
Thus, by using log values, we greatly expand the range of representable numbers. Moreover, we can perform multiplication with logs by using addition, because $\log(x \cdot y) = \log(x) + \log(y)$. So, if we do the multiplication above with logs, we do not have to worry as much about loss of precision due to underflow:
```python
>>> # the result of multiplying small probabilities
>>> np.log(tiny * tiny)
-inf
>>> # the result of adding small log probabilities
>>> np.log(tiny) + np.log(tiny)
-1416.7928370645282
```
Of course, this solution is not a magic bullet. If we need to take the numbers back out of log space (for example, to add probabilities rather than multiply them), then we are back to underflow:
```python
>>> tiny * tiny
0.0
>>> np.exp(np.log(tiny) + np.log(tiny))
0.0
```
Still, doing all our computation in log space can save a lot of headaches. We might be forced to lose that precision if we need the original numbers back, but we at least maintain some information about the probabilities — enough to compare them, for example — that would otherwise be lost.
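When you genuinely do need to add probabilities while staying in log space, the standard remedy is the "log-sum-exp" trick: factor out the largest log value so that at least one exponentiated term is exactly 1. This goes beyond what this chapter's code requires, but it is worth knowing (SciPy also ships a logsumexp function); a minimal sketch:

```python
import numpy as np

def log_sum_exp(log_ps):
    """Compute log(sum(exp(log_ps))) while avoiding needless underflow,
    by factoring out the largest log value before exponentiating."""
    m = np.max(log_ps)
    return m + np.log(np.sum(np.exp(log_ps - m)))
```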
Writing the PMF code
Now that we've seen the importance of working with logs, we can actually write our function to compute the log-PMF:
```python
def log_pmf(self, x):
    """Evaluates the log-probability mass function (log-PMF) of a
    multinomial with outcome probabilities `self.p` for a draw `x`.

    Parameters
    ----------
    x: numpy array of length `k`
        The number of occurrences of each outcome

    Returns
    -------
    The evaluated log-PMF for draw `x`

    """
    # Get the total number of events
    n = np.sum(x)

    # equivalent to log(n!)
    log_n_factorial = gammaln(n + 1)
    # equivalent to log(x1! * ... * xk!)
    sum_log_xi_factorial = np.sum(gammaln(x + 1))

    # If one of the values of self.p is 0, then the corresponding
    # value of self.logp will be -inf. If the corresponding value
    # of x is 0, then multiplying them together will give nan, but
    # we want it to just be 0.
    log_pi_xi = self.logp * x
    log_pi_xi[x == 0] = 0
    # equivalent to log(p1^x1 * ... * pk^xk)
    sum_log_pi_xi = np.sum(log_pi_xi)

    # Put it all together
    log_pmf = log_n_factorial - sum_log_xi_factorial + sum_log_pi_xi
    return log_pmf
```
For the most part, this is a straightforward implementation of the multinomial log-PMF equation above. The gammaln function, from scipy.special, computes the log-gamma function, $\log \Gamma(x)$. As noted above, it is more convenient to use the gamma function rather than the factorial; that is because SciPy gives us a log-gamma function, but no log-factorial function. We could have computed a log factorial ourselves, using something like:
```python
log_n_factorial = np.sum(np.log(np.arange(1, n + 1)))
sum_log_xi_factorial = np.sum([np.sum(np.log(np.arange(1, i + 1))) for i in x])
```
but using SciPy's built-in gamma function is easier to understand, easier to code, and more computationally efficient.
There is one edge case we need to handle: when $p_i = 0$, then $\log p_i = -\infty$. That would be fine on its own, except for the following behavior when infinity is multiplied by zero:
```python
>>> # it's fine to multiply infinity by integers...
>>> -np.inf * 2.0
-inf
>>> # ...but things break when we try to multiply by zero
>>> -np.inf * 0.0
nan
```
nan means "not a number", and it is almost always a headache, because most computations involving a nan produce another nan. So if we did not handle the case where $p_i = 0$ and $x_i = 0$, we would get a nan, that nan would be summed with the other numbers to produce yet another nan, and the result would be useless. To handle this, we explicitly check for the case where $x_i = 0$ and set the corresponding term $x_i \cdot \log p_i$ to zero (which is correct, since $p_i^0 = 1$ and $\log 1 = 0$).
Let's return to our discussion of using logs for a moment. Even if we really only need the PMF, and not the log-PMF, it is usually better to first compute it in log space and then exponentiate if needed:
```python
def pmf(self, x):
    """Evaluates the probability mass function (PMF) of a multinomial
    with outcome probabilities `self.p` for a draw `x`.

    Parameters
    ----------
    x: numpy array of length `k`
        The number of occurrences of each outcome

    Returns
    -------
    The evaluated PMF for draw `x`

    """
    pmf = np.exp(self.log_pmf(x))
    return pmf
```
Let's try it out:
```python
>>> dist = MultinomialDistribution(np.array([0.25, 0.25, 0.25, 0.25]))
>>> dist.log_pmf(np.array([1000, 0, 0, 0]))
-1386.2943611198905
>>> dist.log_pmf(np.array([999, 0, 0, 0]))
-1384.9080667587707
```
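Note that these log probabilities correspond to raw probabilities on the order of $e^{-1386}$, far below the smallest representable float, so calling pmf on the same draw would simply underflow to zero (an illustrative continuation of the session above):

```python
>>> dist.pmf(np.array([1000, 0, 0, 0]))
0.0
```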
Sampling magic items, revisited
Now that we have written our multinomial functions, we can use them to generate our magic items. To do so, we will create a class called MagicItemDistribution, located in the file rpg.py:
```python
class MagicItemDistribution(object):

    # these are the names (and order) of the stats that all magical
    # items will have
    stats_names = ("dexterity", "constitution", "strength",
                   "intelligence", "wisdom", "charisma")

    def __init__(self, bonus_probs, stats_probs, rso=np.random):
        """Initialize a magic item distribution parameterized by
        `bonus_probs` and `stats_probs`.

        Parameters
        ----------
        bonus_probs: numpy array of length m
            The probabilities of the overall bonuses. Each index in
            the array corresponds to the bonus of that amount (e.g.,
            index 0 is +0, index 1 is +1, etc.)
        stats_probs: numpy array of length 6
            The probabilities of how the overall bonus is distributed
            among the different stats. `stats_probs[i]` corresponds to
            the probability of giving a bonus point to the ith stat;
            i.e., the value at `MagicItemDistribution.stats_names[i]`.
        rso: numpy RandomState object (default: np.random)
            The random number generator

        """
        # Create the multinomial distributions we'll be using
        self.bonus_dist = MultinomialDistribution(bonus_probs, rso=rso)
        self.stats_dist = MultinomialDistribution(stats_probs, rso=rso)
```
The constructor of our MagicItemDistribution class takes as parameters the bonus probabilities, the stats probabilities, and the random number generator. Even though we specified above what we want the bonus probabilities to be, it is generally a good idea to encode parameters as arguments that get passed in. This leaves open the possibility of sampling items under different distributions (for example, maybe the bonus probabilities change as the player's level increases). We encode the names of the stats as a class variable, stats_names, though this could just as easily be another argument to the constructor.
As mentioned earlier, sampling a magic item takes two steps: first sample the overall bonus, then sample how the bonus is distributed across the stats. Accordingly, we code these steps as two methods: _sample_bonus and _sample_stats:
```python
def _sample_bonus(self):
    """Sample a value of the overall bonus.

    Returns
    -------
    integer
        The overall bonus

    """
    # The bonus is essentially just a sample from a multinomial
    # distribution with n=1; i.e., only one event occurs.
    sample = self.bonus_dist.sample(1)

    # `sample` is an array of zeros and a single one at the
    # location corresponding to the bonus. We want to convert this
    # one into the actual value of the bonus.
    bonus = np.argmax(sample)
    return bonus

def _sample_stats(self):
    """Sample the overall bonus and how it is distributed across the
    different stats.

    Returns
    -------
    numpy array of length 6
        The number of bonus points for each stat

    """
    # First we need to sample the overall bonus
    bonus = self._sample_bonus()

    # Then, we use a different multinomial distribution to sample
    # how that bonus is distributed. The bonus corresponds to the
    # number of events.
    stats = self.stats_dist.sample(bonus)
    return stats
```
We could have made these a single method — especially since _sample_stats is the only function that depends on _sample_bonus — but I chose to keep them separate, both because it makes the sampling routine easier to understand, and because breaking it into smaller pieces makes the code easier to test.
You'll also notice that these methods are prefixed with an underscore, indicating that they are not really meant to be used outside the class. Instead, we provide the function sample:
```python
def sample(self):
    """Sample a random magical item.

    Returns
    -------
    dictionary
        The keys are the names of the stats, and the values are
        the bonus conferred to the corresponding stat.

    """
    stats = self._sample_stats()
    item_stats = dict(zip(self.stats_names, stats))
    return item_stats
```
The sample function does essentially the same thing as _sample_stats, except that it returns a dictionary with the stats' names as keys. This provides a clean and understandable interface for sampling items — it is obvious which stats have how many bonus points — but it also keeps the option of using _sample_stats open if one needs to take many samples and efficiency is a concern.
We use a similar design for evaluating the probability of items. Again, we expose high-level methods pmf and log_pmf, which take dictionaries of the form produced by sample:
```python
def log_pmf(self, item):
    """Compute the log probability of the given magical item.

    Parameters
    ----------
    item: dictionary
        The keys are the names of the stats, and the values are
        the bonuses conferred to the corresponding stat.

    Returns
    -------
    float
        The value corresponding to log(p(item))

    """
    # First pull out the bonus points for each stat, in the
    # correct order, then pass that to _stats_log_pmf.
    stats = np.array([item[stat] for stat in self.stats_names])
    log_pmf = self._stats_log_pmf(stats)
    return log_pmf

def pmf(self, item):
    """Compute the probability of the given magical item.

    Parameters
    ----------
    item: dictionary
        The keys are the names of the stats, and the values are
        the bonus conferred to the corresponding stat.

    Returns
    -------
    float
        The value corresponding to p(item)

    """
    return np.exp(self.log_pmf(item))
```
These methods rely on _stats_log_pmf, which computes the probability of the stats (but takes an array rather than a dictionary):
```python
def _stats_log_pmf(self, stats):
    """Evaluate the log-PMF for the given distribution of bonus points
    across the different stats.

    Parameters
    ----------
    stats: numpy array of length 6
        The distribution of bonus points across the stats

    Returns
    -------
    float
        The value corresponding to log(p(stats))

    """
    # There are never any leftover bonus points, so the sum of the
    # stats gives us the total bonus.
    total_bonus = np.sum(stats)

    # First calculate the probability of the total bonus
    logp_bonus = self._bonus_log_pmf(total_bonus)

    # Then calculate the probability of the stats
    logp_stats = self.stats_dist.log_pmf(stats)

    # Then multiply them together (using addition, because we are
    # working with logs)
    log_pmf = logp_bonus + logp_stats
    return log_pmf
```
_stats_log_pmf, in turn, relies on _bonus_log_pmf, which computes the probability of the overall bonus:
```python
def _bonus_log_pmf(self, bonus):
    """Evaluate the log-PMF for the given bonus.

    Parameters
    ----------
    bonus: integer
        The total bonus.

    Returns
    -------
    float
        The value corresponding to log(p(bonus))

    """
    # Make sure the value that is passed in is within the
    # appropriate bounds
    if bonus < 0 or bonus >= len(self.bonus_dist.p):
        return -np.inf

    # Convert the scalar bonus value into a vector of event
    # occurrences
    x = np.zeros(len(self.bonus_dist.p))
    x[bonus] = 1

    return self.bonus_dist.log_pmf(x)
```
We can now create our distribution as follows:
```python
>>> import numpy as np
>>> from rpg import MagicItemDistribution
>>> bonus_probs = np.array([0.0, 0.55, 0.25, 0.12, 0.06, 0.02])
>>> stats_probs = np.ones(6) / 6.0
>>> rso = np.random.RandomState(234892)
>>> item_dist = MagicItemDistribution(bonus_probs, stats_probs, rso=rso)
```
Once created, we can use it to generate a few different items:
```python
>>> item_dist.sample()
{'dexterity': 0, 'strength': 0, 'constitution': 0,
 'intelligence': 0, 'wisdom': 0, 'charisma': 1}
>>> item_dist.sample()
{'dexterity': 0, 'strength': 0, 'constitution': 1,
 'intelligence': 0, 'wisdom': 2, 'charisma': 0}
>>> item_dist.sample()
{'dexterity': 1, 'strength': 0, 'constitution': 1,
 'intelligence': 0, 'wisdom': 0, 'charisma': 0}
```
And, if we wish, we can evaluate the probability of an item we have sampled:
```python
>>> item = item_dist.sample()
>>> item
{'dexterity': 0, 'strength': 0, 'constitution': 0,
 'intelligence': 0, 'wisdom': 2, 'charisma': 0}
>>> item_dist.log_pmf(item)
-4.9698132995760007
>>> item_dist.pmf(item)
0.0069444444444444441
```
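As mentioned earlier, one good use of the PMF is testing the sampler: the empirical frequency of an item over many samples should approximate its PMF. Here is a sketch of such a check (the helper name and sample count are arbitrary choices of mine, not from the chapter):

```python
def check_item_probability(item_dist, item, n_samples=100000):
    """Estimate the probability of `item` by brute-force sampling, for
    comparison against the analytically computed PMF."""
    matches = sum(item_dist.sample() == item for i in xrange(n_samples))
    empirical = matches / float(n_samples)
    return empirical, item_dist.pmf(item)
```

For the item above, the two returned values should agree to within sampling noise.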
Estimating attack damage
We've already seen one use of sampling: generating the random items that monsters drop. I mentioned earlier that sampling can also be used when you want to estimate something about a whole distribution, and in some cases we can use our MagicItemDistribution to do just that. For example, suppose damage in our RPG is dealt by rolling some number of D12s (twelve-sided dice). By default, a player rolls one die, and rolls one additional die for each point of their strength bonus. So, for example, a player with a +2 strength bonus rolls three dice. The damage dealt is the sum of the dice (a single D12 averages 6.5, so three dice average 19.5 per hit).
We might want to know how much damage a player is likely to deal after finding some number of items; for example, as a factor in setting the difficulty of monsters. Say that, after collecting two items, we want the player to be able to defeat a monster within three hits in about 50% of battles. How many hit points should the monster have?
One way to answer this question is through sampling. We can use the following scheme:

1. Randomly pick a magic item.
2. Based on the item's bonus, compute the number of dice that will be rolled when attacking.
3. Based on the number of dice, generate a sample of the damage dealt over three hits.
4. Repeat steps 1-3 many times. This will result in an approximation to the distribution over damage.
Implementing a distribution over damage
```python
class DamageDistribution(object):

    def __init__(self, num_items, item_dist,
                 num_dice_sides=12, num_hits=1, rso=np.random):
        """Initialize a distribution over attack damage. This object can
        sample possible values for the attack damage dealt over
        `num_hits` hits when the player has `num_items` items, and
        where attack damage is computed by rolling dice with
        `num_dice_sides` sides.

        Parameters
        ----------
        num_items: int
            The number of items the player has.
        item_dist: MagicItemDistribution object
            The distribution over magic items.
        num_dice_sides: int (default: 12)
            The number of sides on each die.
        num_hits: int (default: 1)
            The number of hits across which we want to calculate damage.
        rso: numpy RandomState object (default: np.random)
            The random number generator

        """
        # This is an array of integers corresponding to the sides of a
        # single die.
        self.dice_sides = np.arange(1, num_dice_sides + 1)
        # Create a multinomial distribution corresponding to one of
        # these dice. Each side has equal probabilities.
        self.dice_dist = MultinomialDistribution(
            np.ones(num_dice_sides) / float(num_dice_sides), rso=rso)

        self.num_hits = num_hits
        self.num_items = num_items
        self.item_dist = item_dist

    def sample(self):
        """Sample the attack damage.

        Returns
        -------
        int
            The sampled damage

        """
        # First, we need to randomly generate items (the number of
        # which was passed into the constructor).
        items = [self.item_dist.sample() for i in xrange(self.num_items)]

        # Based on the item stats (in particular, strength), compute
        # the number of dice we get to roll.
        num_dice = 1 + np.sum([item['strength'] for item in items])

        # Roll the dice and compute the resulting damage.
        dice_rolls = self.dice_dist.sample(self.num_hits * num_dice)
        damage = np.sum(self.dice_sides * dice_rolls)
        return damage
```
The constructor takes as arguments the number of sides on the dice, the number of hits over which we want to calculate damage, the number of items the player has, a distribution over magic items (of type MagicItemDistribution), and a random state object. By default, we set num_dice_sides to 12 because, while it is technically a parameter, it is unlikely to change. Similarly, we set num_hits to 1 by default because the more likely use case is that we just want one sample of the damage for a single hit.
Then we come to sample. (Note the structural similarity to MagicItemDistribution.) First, we generate a set of magic items that the player might possess. Then we look at the strength stat of those items, and from it compute the number of dice to roll. Finally, we roll the dice (again relying on our trusty multinomial functions) and compute the damage dealt.
What happened to evaluating probabilities?
You may have noticed that we did not include a log_pmf or pmf function in DamageDistribution. That is because we do not actually know what the PMF should be! If we wrote it down, the equation would look something like this:

$$
p(\text{damage}=d) = \sum_{\text{item}_1} \cdots \sum_{\text{item}_m} p(d \mid \text{item}_1, \ldots, \text{item}_m)\, p(\text{item}_1) \cdots p(\text{item}_m)
$$
What this equation says is that we would need to compute the probability of every possible amount of damage, given every possible set of items. We could, in fact, compute this by brute force, but it would not be pretty. This is a perfect example of a problem where we want to use sampling to approximate the solution to a question that we cannot (or it would be infeasible to) compute exactly. So, rather than providing a pmf method, we will show in the next section how to approximate the distribution with many samples.
Approximating the distribution
Now we have the machinery to answer our earlier question: if the player has two items, and we want the player to defeat the monster within three hits 50% of the time, how many hit points should the monster have?
First, we create our distribution object, using the same item_dist and rso objects we created earlier:
```python
>>> from rpg import DamageDistribution
>>> damage_dist = DamageDistribution(2, item_dist, num_hits=3, rso=rso)
```
Now we can draw a bunch of samples and compute the 50th percentile (the damage value that is greater than 50% of the samples):
```python
>>> samples = np.array([damage_dist.sample() for i in xrange(100000)])
>>> samples.min()
3
>>> samples.max()
154
>>> np.percentile(samples, 50)
27.0
```
There is a fairly wide range of damage the player could deal, but the distribution has a long tail: the 50th percentile is 27 points, meaning that in half of the samples the player dealt no more than 27 damage. So, if we wanted to use this criterion for monster difficulty, we would give the monster 27 hit points.
Summary
In this chapter, we have seen how to write code to generate samples from nonstandard probability distributions and how to calculate the probabilities of those samples. In studying this example, we have covered several design decisions that apply to the general case:
- Use classes to represent probability distributions, with functions for both sampling and evaluating the PMF (or PDF).
- Compute the PMF (or PDF) using logarithms.
- Generate samples from a random number generator object to enable reproducible randomness.
- Expose functions whose inputs and outputs are easy to interpret (e.g., dictionaries, as in the output of MagicItemDistribution.sample), while still exposing the less clear but more efficient, purely numeric versions of those functions (e.g., MagicItemDistribution._sample_stats).