Written on July 30, 2017.


Reading "Complexity": book notes and extensions


My treatment of the ecology, mathematics, and computer science in this post is superficial; corrections are welcome.

Rabbits that never die

We have all more or less come across this problem:

  1. At the beginning of the first month there was a pair of newborn rabbits
  2. From the end of the second month (i.e., the beginning of the third month), they can reproduce
  3. Each fertile pair of rabbits produces a new pair each month
  4. Rabbits never die

Q: How many pairs of rabbits are there in month n?

A: The number of rabbit pairs in month n is the Fibonacci number F(n). Many people first learn recursion from this problem.
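For reference, here is the standard recursive definition, in the same JavaScript used later in this post:

// Number of rabbit pairs at the start of month n:
// F(1) = F(2) = 1, and F(n) = F(n - 1) + F(n - 2).
function fib(n) {
  if (n <= 2) return 1;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(6)); // 8 pairs in month 6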

But how can rabbits "never die"? So let's start thinking about rabbit birth and death rates. Start with an easy case: each pair gives birth to four rabbits (two pairs) a year, and then the parents die. If there are two rabbits at the start (year 0), there are four rabbits in year 1, eight rabbits in year 2, and so on: the population doubles every year. Writing n_t for the number of rabbits in year t, we have n_{t+1} = 2 · n_t, i.e., n_t = 2^t · n_0.

It is clear that, if unchecked, the rabbit population will grow and eventually fill the planet, and possibly the universe.

What if we consider constraints on population growth? It is conceivable that as the population grows, many of the bunnies will die without breeding because they are too crowded and short of food and space, so the population as a whole cannot grow without bound. Biologists describe population growth in this situation with the logistic model:

n_{t+1} = (b − d) · (n_t − n_t^2 / k)

where n_t is the population size of the current generation, b is the birth rate, d is the death rate, and k is the carrying capacity.

For example, take birth rate b = 2, death rate d = 0.4, and carrying capacity k = 32, and suppose generation 1 has 20 rabbits. The model gives (2 − 0.4) × (20 − 20^2/32) = 1.6 × 7.5 = 12 rabbits in generation 2. Plugging 12 back in gives 12 again for generation 3, and the population stays at 12 from then on.

If we change a parameter, say lowering the death rate to 0.1, the model gives 14.25 rabbits in generation 2 and about 15.01816 in generation 3 (never mind how the number of rabbits can be fractional). What we are doing is feeding each generation's result back into the model's formula to compute the next generation's result. This repeated process is iteration: we are iterating the model.
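To make the iteration concrete, here is a minimal sketch that reproduces the numbers above (the function name logisticModel is my own):

// One step of the logistic model: n' = (b - d) * (n - n^2 / k)
function logisticModel(n, b, d, k) {
  return (b - d) * (n - n * n / k);
}

// b = 2, d = 0.1, k = 32, and generation 1 has 20 rabbits
let n = 20;
n = logisticModel(n, 2, 0.1, 32);
console.log(n); // 14.25
n = logisticModel(n, 2, 0.1, 32);
console.log(n); // ≈ 15.01816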

Logistic map

Iterating the logistic model is still a little cumbersome, so let's simplify further: combine the birth and death rates into a single variable R = b − d, and replace the population size with the fraction of carrying capacity in use, x = n/k (the ratio of the current population to the maximum possible population). Since the current population is always between 0 and k, x is always between 0 and 1.

Dividing both sides of the logistic model formula by k and rewriting, we get:

x_{t+1} = R · x_t · (1 − x_t)

This equation, called the logistic map, is an important equation in dynamical systems theory and chaos research.
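As a quick sanity check (my own, not from the book), iterating the map with R = b − d = 1.9 and x = 20/32 reproduces the earlier model's populations once scaled back by k:

// Logistic map: x' = R * x * (1 - x)
function logisticMap(x, R) {
  return R * x * (1 - x);
}

let x = 20 / 32; // x0 = n0 / k
x = logisticMap(x, 1.9);
console.log(x * 32); // 14.25, as before
x = logisticMap(x, 1.9);
console.log(x * 32); // ≈ 15.01816, as before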

If we change the value of R, then the Logistic map becomes interesting.

Assume R = 2. We find that no matter what the initial value of x is (try 0.2 first), it always ends up at the same fixed value:

Different initial values only change how quickly x reaches 0.5; after a number of steps it always settles at 0.5. This value, 0.5, is called a fixed point.
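A minimal sketch of this convergence (each snippet here redefines logisticMap so it runs on its own):

function logisticMap(x, R) {
  return R * x * (1 - x);
}

let x = 0.2;
for (let t = 1; t <= 10; t++) {
  x = logisticMap(x, 2);
  console.log(t, x);
}
// 0.32, 0.4352, 0.49160192, 0.49985894, ... converging to 0.5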

If we increase R, the fixed point survives for a while, but convergence to it gets slower and slower. At R = 3.1, the situation turns out to be different:

R = 3.1, x0 = 0.2

The value of x eventually oscillates between 0.5580141 and 0.7645665. We call this pair of values an attractor (here, a period-2 attractor).
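A quick check of the two-value attractor (the 200-step burn-in to discard the transient is an arbitrary choice of mine):

function logisticMap(x, R) {
  return R * x * (1 - x);
}

let x = 0.2;
for (let t = 0; t < 200; t++) x = logisticMap(x, 3.1); // discard transient
for (let t = 0; t < 4; t++) {
  x = logisticMap(x, 3.1);
  console.log(x.toFixed(7)); // alternates between 0.5580141 and 0.7645665
}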

As R changes, things get more complicated. Here is a summary I excerpted from Wikipedia (a sketch after the list checks a few of these regimes numerically):

  1. R between 0 and 1: whatever the initial value, x gets smaller and smaller until it approaches 0.
  2. R between 1 and 2: whatever the initial value, x rapidly approaches (R − 1)/R.
  3. R between 2 and 3: over several iterations, x also gets closer and closer to (R − 1)/R, but first oscillates around that value; the rate of convergence is linear.
  4. R = 3: x still gets closer and closer to (R − 1)/R, but convergence is extremely slow and no longer linear.
  5. R between 3 and 1 + √6 (about 3.45): for almost all initial values, x ends up oscillating forever between two values, i.e., the sequence becomes a, b, a, b, …, where a and b depend on R.
  6. R between 3.45 and about 3.54: for almost all initial values, x ends up oscillating continuously among four values.
  7. R slightly greater than 3.54: x ends up oscillating among 8 values, then 16, 32, and so on. The intervals of R that yield each period shrink rapidly; the ratio between the lengths of two successive intervals approaches the Feigenbaum constant δ ≈ 4.669….
  8. R ≈ 3.5699: the oscillations disappear and the whole system enters chaos. For almost all initial values there is no longer any fixed period of oscillation, and arbitrarily small changes in the initial value produce clearly different results over time, which is the hallmark of chaos.
  9. R greater than 3.5699: the system is chaotic, but certain values of R still give non-chaotic, periodic behavior; these intervals are called "islands of stability". For example, starting at about R = 3.82 there is a window with period 3, and slightly larger R gives periods 6, 12, and so on.
  10. For R between about 3.5699 and about 3.8284, the development of chaotic behavior is sometimes called the Pomeau–Manneville scenario, characterized by periodic (laminar) phases interrupted by bursts of aperiodic behavior; this feature has applications in semiconductor devices. There are also windows with period 5 and others; in fact, for any period there is some value of R that produces it.
  11. R greater than 4: for almost all initial values, x eventually leaves the interval [0, 1] and diverges.
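Here is the sketch mentioned above: a rough period detector (entirely my own construction, with arbitrary tolerances) that probes a few of these regimes:

function logisticMap(x, R) {
  return R * x * (1 - x);
}

// Iterate past the transient, then look for the smallest p such that
// x returns to (nearly) the same value after p more steps.
function estimatePeriod(R, x0 = 0.2, maxPeriod = 64) {
  let x = x0;
  for (let t = 0; t < 10000; t++) x = logisticMap(x, R);
  const ref = x;
  for (let p = 1; p <= maxPeriod; p++) {
    x = logisticMap(x, R);
    if (Math.abs(x - ref) < 1e-9) return p;
  }
  return Infinity; // no short period found: likely chaotic
}

for (const R of [2.8, 3.2, 3.5, 3.55, 3.83, 3.9]) {
  console.log(R, estimatePeriod(R));
}
// Expected, roughly: 1, 2, 4, 8, 3 (island of stability), Infinity (chaos)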

That summary might seem a little dry, but that's okay: let's draw a picture to get a feel for this chaotic system. Take an R value greater than 3.5699 and no greater than 4; here we might as well take R = 4. Start x at 0.2, then iterate 100 times:

R = 4, x0 = 0.2

A sense of chaos. Although each value of x uniquely determines the next, together they form a trajectory that looks very random. On a computer we can even use this to generate pseudo-random numbers. So surface randomness can come from a very simple deterministic system.
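For reference, the trajectory in the figure can be reproduced in a few lines (plotting left to whatever tool you like):

function logisticMap(x, R) {
  return R * x * (1 - x);
}

const trajectory = [0.2];
for (let t = 0; t < 100; t++) {
  trajectory.push(logisticMap(trajectory[t], 4));
}
console.log(trajectory.join("\n")); // feed these values into a plot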

Moreover, for a chaotic value of R, any uncertainty in the initial value makes the trajectory unpredictable after a certain time. Let's change the initial value a tiny bit, say in the 10th decimal place, and take 0.2000000001, which is very close to 0.2. Plotting the two trajectories together:

You can see that at around t = 30 the two trajectories have separated, and the value of x is no longer predictable. In fact, as long as there is any uncertainty in the initial value, no matter how many decimal places out it sits, the trajectory becomes unpredictable once t exceeds some value.
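We can measure the separation time directly (the 0.01 threshold below is my arbitrary notion of "clearly separated"):

function logisticMap(x, R) {
  return R * x * (1 - x);
}

let a = 0.2;
let b = 0.2000000001;
for (let t = 1; t <= 100; t++) {
  a = logisticMap(a, 4);
  b = logisticMap(b, 4);
  if (Math.abs(a - b) > 0.01) {
    console.log("trajectories separate at t =", t);
    break;
  }
}
// Prints a value near t = 30: the gap roughly doubles each step,
// so 1e-10 reaches 1e-2 after about log2(1e8) ≈ 27 steps.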

Pseudo-random algorithm

Next, let's use the chaotic behavior of the logistic map to implement a simple pseudo-random algorithm.

// Iterate the logistic map with R = 4, deep in the chaotic regime,
// to produce pseudo-random numbers in (0, 1).
function* RandomGenerator(seed = 0.2) {
  function getNext(x) {
    return 4 * x * (1 - x);
  }
  // Burn in 30 iterations so the output no longer resembles the seed.
  for (let i = 0; i < 30; i++) {
    seed = getNext(seed);
  }
  while (true) {
    seed = getNext(seed);
    yield seed;
  }
}

const randomGenerator = RandomGenerator();

function random() {
  return randomGenerator.next().value;
}

In fact, some common pseudo-random number generation algorithms, such as the linear congruential method, use a similar trick: building a chaotic-looking sequence through iteration. Such an algorithm, I think, is after three things:

  1. Statistical randomness: the outputs should be evenly distributed, with no bias.
  2. A long cycle. An algorithm like the one above, constrained by the value of R and the precision of the computer's floating-point numbers, must at some point produce a value it has produced before, causing the iteration to fall into a loop. For the linear congruential method, the cycle length is related to the chosen modulus (see the sketch below).
  3. Speed.
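For comparison, here is a minimal linear congruential sketch using the classic Park–Miller "minimal standard" parameters (the generator wrapper mirrors the one above and is my own):

// Park–Miller LCG: seed' = (16807 * seed) % (2^31 - 1).
// 16807 * (2^31 - 2) < 2^53, so plain JS numbers stay exact here.
function* lcg(seed = 42) {
  const m = 2147483647; // modulus: the prime 2^31 - 1
  const a = 16807;      // multiplier: a primitive root mod m
  while (true) {
    seed = (a * seed) % m;
    yield seed / m; // scale into (0, 1)
  }
}

const gen = lcg();
console.log(gen.next().value);
console.log(gen.next().value);

Because 16807 is a primitive root modulo 2^31 − 1, the sequence runs through all m − 1 nonzero seeds before repeating, a cycle of about 2.1 × 10^9, which illustrates point 2 above.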