Original link: tecdat.cn/?p=13734
In actuarial science, when we deal with sums of independent random variables, the characteristic function is interesting, because the characteristic function of a sum is the product of the characteristic functions.
- Introduction
In probability theory, let F denote the cumulative distribution function of some random variable X, i.e., F(x) = P(X ≤ x). What is the moment generating function of X, i.e., M(t) = E[exp(tX)]? And how do we compute it?
In probability textbooks, the standard answer is
- M(t) = Σ exp(tx) P(X = x), summing over the atoms x, if X is discrete;
- M(t) = ∫ exp(tx) f(x) dx if X is (absolutely) continuous, where f is the density.
Here, consider X with distribution function F(x) = 0 for x < 0 and F(x) = 1 − exp(−x)/3 for x ≥ 0. X is obviously not a discrete variable. But is it continuous? You need to plot the distribution function to see: F(x) = 0 for all x < 0, while F(0) = 2/3,
so F has a discontinuity at 0, with P(X = 0) = 2/3. We have to be careful here: X is neither continuous nor discrete. Let's use the decomposition formula
E[exp(tX)] = Σ E[exp(tX) | A] P(A),
summing over a partition of events A, if we can write it that way.
This simply means that the population average is the center of gravity of the averages of each subgroup. Here, take the three events {X < 0}, {X = 0} and {X > 0} as the partition.
Let's consider the three components separately.
The first term is E[exp(tX) | X < 0] P(X < 0) = 0, since P(X < 0) = 0 here; the second is E[exp(tX) | X = 0] P(X = 0) = 2/3 (because exp(t·0) = 1 is a real-valued constant).
So finally, we have to compute E[exp(tX) | X > 0] P(X > 0). Given X > 0, X is an (absolutely) continuous random variable. Observe that for all x > 0,
P(X > x | X > 0) = P(X > x) / P(X > 0) = (exp(−x)/3) / (1/3) = exp(−x),
i.e., given X > 0, X has a standard exponential distribution.
As a result, the distribution of X is a mixture of an exponential distribution and a Dirac mass at 0. This is actually the tricky part of the problem, because it is not obvious when we look at the formula of F above.
From now on, this is high-school math:
E[exp(tX) | X > 0] = ∫₀^∞ exp(tx) exp(−x) dx = 1/(1 − t)
if t < 1. Putting it all together,
M(t) = 2/3 + 1/(3(1 − t)), for t < 1.
- Monte Carlo calculations
The function M can also be estimated using Monte Carlo simulations,
> F <- function(x) ifelse(x < 0, 0, 1 - exp(-x)/3)
> Finv <- function(u) uniroot(function(x) F(x) - u, c(-1e-9, 1e4))$root
Or (to avoid problems at the discontinuity)
> Finv <- function(u) ifelse(u <= 2/3, 0, uniroot(function(x) F(x) - u, c(1e-9, 1e4))$root)
Here, the inverse is actually easy to get in closed form (Finv(u) = −log(3(1 − u)) for u > 2/3, and 0 otherwise), so we can use it directly.
Then, computing the Monte Carlo estimate v of M(t) over a grid u of values of t, and comparing it with the theoretical value, we use
> plot(u,v,type="b",col='blue')
> lines(u,Mtheo(u),col="red")
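The simulation step is not shown in full above, so here is a sketch of the whole experiment in Python (the helper names finv, mgf_mc and mgf_theo are mine, not from the original code):

```python
import numpy as np

# Quantile function of F(x) = 1 - exp(-x)/3 for x >= 0 (and 0 for x < 0):
# F jumps from 0 to 2/3 at 0, so Finv(u) = 0 for u <= 2/3, and solving
# 1 - exp(-x)/3 = u gives Finv(u) = -log(3*(1 - u)) for u > 2/3.
def finv(u):
    return np.where(u <= 2/3, 0.0, -np.log(3*(1 - u)))

def mgf_mc(t, n=100_000, seed=1):
    """Monte Carlo estimate of M(t) = E[exp(tX)], by inverse transform."""
    rng = np.random.default_rng(seed)
    x = finv(rng.uniform(size=n))
    return np.exp(t*x).mean()

def mgf_theo(t):
    """Closed form derived above: M(t) = 2/3 + 1/(3(1 - t)), for t < 1."""
    return 2/3 + 1/(3*(1 - t))

print(mgf_mc(0.25), mgf_theo(0.25))
```

For t = 0.25 both values should be close to 10/9; the Monte Carlo estimate fluctuates around the theoretical curve, as in the plot above.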
The problem with Monte Carlo simulations is that they should only be used when they are valid. For instance, I can still compute
> M(3)
[1] 5748134
A finite average can always be computed numerically, even when the underlying expectation does not exist. Here E[exp(3X)] does not exist, since ∫₀^∞ exp(3x) · exp(−x)/3 dx diverges, yet the simulation above happily returned a (meaningless) finite number. It is just like the mean of a Cauchy sample: I can always compute it, even though the expected value does not exist,
> mean(rcauchy(1000000))
[1] 0.006069028
These generating functions are interesting when they exist. Maybe using the characteristic function, which always exists, is a better idea.
- Generating function
First, let's define those functions: the moment generating function of X is M(t) = E[exp(tX)], which is well defined, when it exists, if t is small enough.
Now, if we use the Taylor expansion
exp(tX) = 1 + tX + t²X²/2! + t³X³/3! + …
and take expectations term by term,
M(t) = 1 + t E[X] + t² E[X²]/2! + t³ E[X³]/3! + …
So if we look at the k-th derivative of this function at 0, then M⁽ᵏ⁾(0) = E[Xᵏ]: the derivatives of M at the origin are the moments of X.
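For the Dirac/exponential mixture studied above, M(t) = 2/3 + 1/(3(1 − t)), so we can check numerically that the derivatives at 0 recover E[X] = 1/3 and E[X²] = 2/3 (a small Python sketch using finite differences):

```python
def mgf(t):
    # Closed form for the Dirac/exponential mixture: M(t) = 2/3 + 1/(3(1-t))
    return 2/3 + 1/(3*(1 - t))

h = 1e-4
m1 = (mgf(h) - mgf(-h)) / (2*h)              # ~ M'(0)  = E[X]   = 1/3
m2 = (mgf(h) - 2*mgf(0) + mgf(-h)) / h**2    # ~ M''(0) = E[X^2] = 2/3

print(m1, m2)
```

Indeed, the mixture puts mass 2/3 at 0 and mass 1/3 on a unit exponential, so E[X] = (1/3)·1 and E[X²] = (1/3)·2, matching the finite-difference values.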
In higher dimensions, a moment generating function can also be defined for random vectors, M(t) = E[exp(⟨t, X⟩)].
The moment generating function is clearly interesting if you want to derive the moments of a given distribution. Another interesting feature is that, in some cases, the moment generating function (under some technical conditions) completely characterizes the distribution of the random variable:
if M_X(t) = M_Y(t) for all t in a neighborhood of 0, then X and Y have the same distribution.
- Fast Fourier transform
If you think back to Euler's formula,
exp(ix) = cos(x) + i sin(x),
you should not be surprised to see the Fourier transform appear here. From this formula, we can define the characteristic function,
φ(t) = E[exp(itX)],
which, unlike the moment generating function, always exists.
Using some results from Fourier analysis, we can show that the probability mass function satisfies
P(X = x) = lim (1/2T) ∫₋T^T exp(−itx) φ(t) dt, as T → ∞,
i.e., it can be written as an inverse Fourier transform of the characteristic function.
A similar relationship holds where the distribution is absolutely continuous: in fact, we can show that the density satisfies
f(x) = (1/2π) ∫₋∞^∞ exp(−itx) φ(t) dt.
The cumulative distribution function can then be obtained using the inversion formula derived by Gil-Pelaez in 1951,
F(x) = 1/2 − (1/π) ∫₀^∞ Im[exp(−itx) φ(t)] / t dt.
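As a sanity check, the Gil-Pelaez formula is easy to implement numerically. Here is a Python sketch (gil_pelaez_cdf is my own helper), applied to the standard normal distribution, whose characteristic function exp(−t²/2) and cdf are both known:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def gil_pelaez_cdf(x, cf):
    """F(x) = 1/2 - (1/pi) * integral_0^inf Im(exp(-i t x) cf(t)) / t dt."""
    integrand = lambda t: np.imag(np.exp(-1j*t*x) * cf(t)) / t
    value, _ = quad(integrand, 0, np.inf, limit=200)
    return 0.5 - value/np.pi

cf_norm = lambda t: np.exp(-t**2/2)   # characteristic function of N(0,1)
print(gil_pelaez_cdf(1.0, cf_norm), norm.cdf(1.0))
```

The two printed values should agree to several decimal places; the integrand has a finite limit at t = 0, so the numerical integration is well behaved.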
This formula is known to anyone working in financial markets, since it is used to price options (see, for example, Carr & Madan (1999)). The good news is that these formulas can be computed with any mathematical or statistical software.
- Characteristic functions and actuarial science
In actuarial science, when we deal with sums of independent random variables, the characteristic function is interesting, because the characteristic function of the sum is the product of the individual characteristic functions. Consider the problem of computing the 99.5% quantile of a compound Poisson sum of Gamma random variables, i.e.,
S = X₁ + … + X_N,
where N ~ Poisson(λ) and the Xᵢ are i.i.d. Gamma(α, β), independent of N. The strategy is to discretize the individual losses,
then to compute the (discretized) distribution f of the compound sum using the fast Fourier transform, and finally to read off the 99.5% quantile with
> sum(cumsum(f) < .995)
Consider the following loss amounts
> print(X[1:5])
[1] 75.51818 118.16428 14.57067 13.97953 43.60686
Let’s fit a gamma distribution. We can use
      shape           rate    
  1.309020256    0.013090411 
 (0.117430137)  (0.001419982)
> alpha
[1] 1.308995
> beta
[1] 0.01309016
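Only five of the observed losses are shown above, so as an illustration here is the same maximum-likelihood fit in Python, on data simulated with the fitted parameters (the simulated sample and its size are my own choices):

```python
import numpy as np
from scipy import stats

# Simulated losses with the parameters reported above
# (shape alpha ~ 1.309, rate beta ~ 0.01309, i.e. scale = 1/beta)
rng = np.random.default_rng(1)
losses = rng.gamma(shape=1.309, scale=1/0.01309, size=10_000)

# Maximum-likelihood fit with the location fixed at 0 (as fitdistr does)
shape, loc, scale = stats.gamma.fit(losses, floc=0)
print(shape, 1/scale)   # estimates of (alpha, beta)
```

Note that scipy parameterizes the Gamma with a scale, so the rate β is recovered as 1/scale.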
In any case, we now have Gamma distribution parameters for the individual losses. Assume further that the mean of the Poisson count variable is
> lambda <- 100
Again, you can use Monte Carlo simulations. We can use the following generic code: First, we need functions to generate the two variables of interest,
If we generate a million variables, we can get estimates of quantiles,
> set.seed(1)
> quantile(rcpd4(1e6), .995)
   99.5% 
13651.64
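The generator rcpd4 is not shown in full above, so here is a Python sketch of the same Monte Carlo approach (rcpd is my own name), using the fact that, conditionally on N = n, the sum of n i.i.d. Gamma(α, β) variables is Gamma(nα, β):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, alpha, beta = 100, 1.309, 0.01309   # fitted values from above

def rcpd(n):
    """n draws of S = X_1 + ... + X_N, N ~ Poisson(lam), X_i ~ Gamma(alpha, beta).
    Conditional on N, the sum of N iid Gamma(alpha, beta) is Gamma(N*alpha, beta)."""
    N = rng.poisson(lam, size=n)
    return rng.gamma(shape=N*alpha, scale=1/beta)

q = np.quantile(rcpd(1_000_000), 0.995)
print(q)
```

With one million draws, q should be close to the 13651.64 obtained in R (up to Monte Carlo error).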
Another idea is to use the stability of the Gamma family under convolution: the sum of independent Gamma variables is still Gamma (with some assumptions on the parameters, but here the variables are identically distributed, so this is fine). We can therefore compute the cumulative distribution function of the compound sum as a Poisson-weighted mixture of Gamma distribution functions,
If we solve F(x) = 0.995 numerically, we get the quantile
> uniroot(function(x) F(x) - .995, c(0, 1e5))$root
[1] 13654.43
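The same computation can be sketched in Python (cdf_S and the Poisson truncation point are my own choices):

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

lam, alpha, beta = 100, 1.309, 0.01309   # fitted values from above

def cdf_S(x):
    """P(S <= x) as a Poisson mixture of Gamma(n*alpha, beta) cdfs,
    truncating the Poisson distribution where its mass is negligible."""
    ns = np.arange(1, 301)          # P(N > 300) is essentially 0 for lam = 100
    w = stats.poisson.pmf(ns, lam)
    mix = np.sum(w * stats.gamma.cdf(x, a=ns*alpha, scale=1/beta))
    return stats.poisson.pmf(0, lam) + mix   # N = 0 gives S = 0

q = brentq(lambda x: cdf_S(x) - 0.995, 1.0, 1e5)
print(q)   # close to 13654.43, the uniroot value above
```

The root-finder brentq plays the role of R's uniroot here.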
This is consistent with our Monte Carlo calculations. Now, we can also use the Fast Fourier transform here,
> sum(cumsum(f)<.995)
[1] 13654
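Since the R code constructing f is only partially shown, here is a Python sketch of the FFT approach (the grid size 2^20 and the unit discretization step are assumptions):

```python
import numpy as np
from scipy import stats

lam, alpha, beta = 100, 1.309, 0.01309   # fitted values from above

# Discretize the Gamma severity on the grid 0, 1, 2, ..., n-1 (step 1)
n = 2**20
p = np.diff(stats.gamma.cdf(np.arange(n + 1) - 0.5, a=alpha, scale=1/beta))

# Compound Poisson: the transform of the aggregate distribution is
# exp(lam * (phat - 1)), where phat is the DFT of the severity
phat = np.fft.fft(p)
f = np.real(np.fft.ifft(np.exp(lam*(phat - 1))))

q = np.sum(np.cumsum(f) < 0.995)
print(q)   # close to 13654, the FFT value obtained in R
```

The discretized aggregate distribution f sums to 1 (up to numerical error), and counting the grid points whose cumulative mass stays below 0.995 gives the quantile directly.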
Let’s compare the computing time to get these three outputs
> system.time( ... )
   user  system elapsed 
  2.453   0.106   2.611 
> system.time( ... )
   user  system elapsed 
  0.041   0.012   0.361 
> system.time( ... )
   user  system elapsed 
  0.527   0.020   0.560