Noise is awesome. GPUs are awesome. Let's put the former onto the latter. Great, let's go!
Perlin noise vs. Simplex noise
People are in general very confused about the Perlin noise algorithm. Kenneth Perlin defined it in his paper from 1985, found here. In 2002 he revised it and improved it significantly, which is why the new algorithm has a new name: Simplex noise. The paper on the Simplex noise algorithm can be found here.
Simplex noise is more complicated than standard Perlin noise and thus merits its own blog post. Focusing on Perlin noise is a good start, though, since Simplex noise is more of an evolution of Perlin noise.
We want a mathematical function, call it N for noise, that takes a value as input and returns a value. We would like this function to have a certain property: the values it returns should be random, but not too random. There are a lot of different kinds of randomness, and what we want is pseudorandomness. In short, this means that the produced values appear to be random but are in fact generated by a completely deterministic algorithm, and thus the sequence is fixed.
Here is an example of a number sequence in which the values are random. N(x_0) = y_0, N(x_1) = y_1, …
Here is an example of a number sequence in which the values are pseudorandom. N(x_0) = y_0, N(x_1) = y_1, …
Notice how the same x_i always gives us the same y_i; this determinism is an important property.
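To make the determinism concrete, here is a minimal sketch of such a function: a small integer hash (the constants are arbitrary, not from any particular noise implementation) that maps each input to a value in [0, 1). The same x_i always produces the same y_i.

```python
def hash_noise(x, seed=0):
    """Tiny integer hash (illustrative only): maps an integer x to a
    pseudorandom-looking value in [0, 1), fully deterministically."""
    h = (x * 374761393 + seed * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    h ^= h >> 16
    return h / 2**32

# Determinism: the same input always yields the same output.
print(hash_noise(42) == hash_noise(42))  # True
```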
For our function N to be a noise function, the number sequence it produces must have some kind of structure. Something needs to govern how the values change and at what rate. There must be some kind of continuum in the sequence of numbers, or let's say a smoothness.
Here is an example of a number sequence in which the values are smooth. N(x_0) = y_0, N(x_1) = y_1, …
In a sense, controlling how the sequence of numbers changes is exactly what a noise function does.
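One simple way to get this smoothness is to fix pseudorandom values at the integer points and interpolate between them, so the output changes gradually. A sketch, assuming a precomputed list of values standing in for a hash of the integer points (this is 1D value noise, a simpler cousin of Perlin noise, not Perlin's algorithm itself):

```python
def smooth_noise(x, values):
    """Linearly interpolate between fixed pseudorandom values at the
    integer points, so nearby inputs give nearby outputs."""
    i = int(x)        # integer point to the left of x
    t = x - i         # fractional position between i and i + 1
    return values[i] * (1 - t) + values[i + 1] * t

vals = [0.1, 0.9, 0.4, 0.7]
# Between x = 0 and x = 1 the output slides smoothly from 0.1 to 0.9.
print(smooth_noise(0.0, vals), smooth_noise(0.5, vals), smooth_noise(1.0, vals))
```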
Perlin noise in 2D works on a uniform integer coordinate grid. At each of the vertices of the grid (where the lines meet) a pseudorandom gradient vector is assigned.
Both Perlin noise and Simplex noise are algorithms which depend on a coordinate grid. The grid serves as a point of reference for the input coordinates.
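Here is a minimal Python sketch of 2D Perlin noise built on that grid: each grid vertex is hashed to a pseudorandom gradient, each corner gradient is dotted with the offset from that corner to the input point, and the four results are blended with Perlin's fade curve. Note this substitutes a simple integer hash for Perlin's original permutation table, so it is a sketch of the structure, not the canonical implementation.

```python
import math

def fade(t):
    # Perlin's smoothing curve 6t^5 - 15t^4 + 10t^3.
    return t * t * t * (t * (t * 6 - 15) + 10)

def gradient(ix, iy, seed=0):
    """Hash the grid vertex (ix, iy) to a pseudorandom unit gradient.
    The hash constants are arbitrary (an assumption, not Perlin's table)."""
    h = (ix * 374761393 + iy * 668265263 + seed) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    angle = (h / 2**32) * 2 * math.pi
    return math.cos(angle), math.sin(angle)

def lerp(a, b, t):
    return a + t * (b - a)

def perlin(x, y):
    """2D Perlin noise: dot each corner gradient with the offset from
    that corner to (x, y), then blend the four dot products."""
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = x - x0, y - y0
    dots = {}
    for dx in (0, 1):
        for dy in (0, 1):
            gx, gy = gradient(x0 + dx, y0 + dy)
            dots[(dx, dy)] = gx * (tx - dx) + gy * (ty - dy)
    u, v = fade(tx), fade(ty)
    top = lerp(dots[(0, 0)], dots[(1, 0)], u)
    bottom = lerp(dots[(0, 1)], dots[(1, 1)], u)
    return lerp(top, bottom, v)
```

Note that at the grid vertices themselves the offset to the nearest corner is zero, so the noise value there is exactly 0; the interesting variation happens between the vertices.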
Since we are generating an image, every pixel is independent of the others. We can therefore run the Perlin noise algorithm over all of the pixels in parallel, which is super awesome. Let's call this feature pixel-parallel.
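To illustrate the pixel-parallel idea on the CPU (a stand-in for the GPU, with a hypothetical `pixel_value` function in place of real noise): because each pixel depends only on its own coordinates, evaluating the pixels in parallel gives exactly the same image as evaluating them one by one.

```python
from concurrent.futures import ThreadPoolExecutor

def pixel_value(coord):
    """Stand-in per-pixel computation; any pure noise function fits here."""
    x, y = coord
    return ((x * 374761393 + y * 668265263) & 0xFFFFFFFF) / 2**32

width, height = 4, 3
coords = [(x, y) for y in range(height) for x in range(width)]

# Sequential and parallel evaluation produce identical images, because
# no pixel reads the result of any other pixel.
sequential = [pixel_value(c) for c in coords]
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(pixel_value, coords))
print(sequential == parallel)  # True
```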
It turns out that there are parts of the algorithm itself that are inherently independent of each other. We might want to investigate whether or not this yields any performance increase. We start by breaking the algorithm apart and analyzing the dependencies of its parts on each other. We will represent this with a directed acyclic graph (DAG).
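As a sketch of this decomposition, here is one hypothetical breakdown of the Perlin noise stages as a DAG in Python (the stage names are our own labels, not terms from the paper). An edge from A to B means B needs A's result; stages with no edge between them, such as computing the corner gradients and the fade weights, are exactly the ones that could run in parallel.

```python
from graphlib import TopologicalSorter

# Maps each stage to the set of stages it depends on.
deps = {
    "corner_gradients": {"lattice_coords"},
    "offset_vectors":   {"lattice_coords"},
    "dot_products":     {"corner_gradients", "offset_vectors"},
    "fade_weights":     {"lattice_coords"},
    "interpolation":    {"dot_products", "fade_weights"},
}

# A topological order is any valid execution order of the stages.
order = list(TopologicalSorter(deps).static_order())
print(order)
```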
Given n we can see that blablabla