
SWInE: Simplicial Weight Interpolation and Extrapolation

I've been procrastinating on this for more than ten years. The thing is, this particular algorithm is my brainchild. While working on my thesis, I stole exactly three ideas from other people, made them work together, and gave that compilation a funny name. This counts as authorship, right?

I had my fun playing with the concept, but the conclusion I had to put into my thesis was: “SWInE is rather promising”. In academic language, this means “useless”. Almost everything you can do with SWInE, you can do better with splines. And this creates a dilemma. On the one hand, I can share a unique piece of knowledge very few people in the world possess; on the other hand, this piece of knowledge is worthless.

Finally, I came up with this argument. The three ideas behind the algorithm are prominent. And if an algorithm helps show them in action, then maybe this is what makes the algorithm worthy.

Idea 1: inverse weights

Let's say we have two points (x1, y1) and (x2, y2). There is a function F(x) that fits these points. “Fits” means that

F(x1) = y1
F(x2) = y2

When we determine this function in between the points, it is called interpolation. When we do that to the left and to the right of the points, it is called extrapolation.

Using the inverse weights is the way to build the interpolating function everywhere apart from the points themselves.

∀ x, xi < x < xi+1:   F(x) = (yi * k(x - xi) + yi+1 * k(xi+1 - x)) / (k(x - xi) + k(xi+1 - x))

k(x) = 1 / xⁿ

Yes, there would be a division by zero at xi, so the function is left undefined there. But we can fix that with the magic of limits. The function gets infinitely close to the points in their respective proximities, so we can make it continuous by defining it at the points explicitly:

F(xi) = yi
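
To spell the magic out, assuming the weight coefficient n is positive: divide the numerator and the denominator by k(x - xi), and the ratio of the weights vanishes as x approaches xi from inside the interval.

$$\lim_{x \to x_i} F(x) = \lim_{x \to x_i} \frac{y_i + y_{i+1}\,\frac{k(x_{i+1} - x)}{k(x - x_i)}}{1 + \frac{k(x_{i+1} - x)}{k(x - x_i)}} = y_i, \quad \text{since} \quad \frac{k(x_{i+1} - x)}{k(x - x_i)} = \frac{(x - x_i)^n}{(x_{i+1} - x)^n} \to 0.$$

The same argument works on the other side of xi.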

Also, outside the points, the function can be defined as the value of the first point on the left and as the value of the last point on the right. This will work as a primitive extrapolation.

The coefficient n in the weight function affects the way the function looks. You can experiment with it on the following interactive plot.

[Interactive plot: experiment with the points and the weight coefficient.]

When the weight coefficient equals zero, the function becomes discontinuous. Negative coefficients are useless for interpolation. However, fractional coefficients may be useful. They bring more control over the function with no additional changes in the algorithm.

Mathematically, the inverse weights are simple. But there is no such thing as a limit in floating-point computation; there is no infinitely small proximity. There is no grace, only digits and errors. You have to define the interpolating function explicitly not only at the xi points but in some measurable proximity of them too.

Then again, every computation on floating-point numbers is discrete so pragmatically this is just one more “guess an epsilon” problem.
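
To make Idea 1 concrete, here is a minimal sketch in Python. The names and the particular epsilon are my own choices for illustration, not the code behind the plots.

def inverse_weight_interpolation(xs, ys, x, n=2.0, eps=1e-9):
    """Inverse-weight interpolation through the points (xs[i], ys[i]).
    xs are expected to be sorted in ascending order."""
    # Primitive extrapolation: the first value on the left, the last one on the right.
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    # The "guess an epsilon" part: near a data point, return its value
    # explicitly instead of dividing by something vanishingly small.
    for xi, yi in zip(xs, ys):
        if abs(x - xi) < eps:
            return yi
    # Find the enclosing interval [xs[i], xs[i + 1]].
    i = 0
    while xs[i + 1] < x:
        i += 1
    k = lambda d: 1.0 / d ** n   # the inverse weight function
    wi = k(x - xs[i])            # the closer the point, the bigger its weight
    wj = k(xs[i + 1] - x)
    return (ys[i] * wi + ys[i + 1] * wj) / (wi + wj)

print(inverse_weight_interpolation([0.0, 1.0, 2.0], [0.0, 1.0, 0.0], 0.5))  # 0.5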

Idea 2: basis functions

We can now turn points into functions. That's awesome. But with inverse weights, the interpolating function always makes a plateau near the xi points. What if we want it to have a slope instead?

The idea of basis functions is: we can interpolate functions instead of points.

F(x) = (fi(x) * k(x - xi) + fi+1(x) * k(xi+1 - x)) / (k(x - xi) + k(xi+1 - x))

And we can switch from values to basis functions for the extrapolation as well.

Technically, every continuous function f(x) such that f(xi) = yi can be a basis function. Let's try things out with a few selected types of polynomial functions: the constant (which is a 0-degree polynomial in a way), the linear function, and the quadratic function.

fi(x) = yi
fi(x) = ax + b;  axi + b = yi
fi(x) = ax² + bx + c;  axi² + bxi + c = yi

The magic of inverse weights is in their indifference. They can work with any basis functions and it's easy to mix them up in a single formula to get something you want.
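
Here is how that mixing might look in code. A sketch again: the particular constant and linear basis functions below are made up purely for illustration.

def basis_function_interpolation(xs, fs, x, n=2.0, eps=1e-9):
    """Inverse-weight interpolation of basis functions fs[i],
    each passing through its own point (xs[i], fs[i](xs[i]))."""
    if x <= xs[0]:
        return fs[0](x)    # extrapolate with the leftmost basis function...
    if x >= xs[-1]:
        return fs[-1](x)   # ...and with the rightmost one on the other side
    for xi, fi in zip(xs, fs):
        if abs(x - xi) < eps:
            return fi(x)
    i = 0
    while xs[i + 1] < x:
        i += 1
    k = lambda d: 1.0 / d ** n
    wi, wj = k(x - xs[i]), k(xs[i + 1] - x)
    return (fs[i](x) * wi + fs[i + 1](x) * wj) / (wi + wj)

# Mixing a constant at x = 0 with a linear function at x = 1:
# a plateau on the left and a slope on the right.
xs = [0.0, 1.0]
fs = [lambda t: 1.0,            # constant:  f(0) = 1
      lambda t: 2.0 * t - 1.0]  # linear:    f(1) = 1, slope 2
print(basis_function_interpolation(xs, fs, 0.5))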

On this interactive plot, you can see how different types of basis functions work together in different configurations.

[Interactive plot: select one of several basis function configurations and the weight coefficient.]

With inverse weights and basis functions, we have almost too much freedom. We can have our function smooth on some interval and sharp at one particular point. Or we can make it lie on a plateau near some point and then fit some curvature into an interval.

But so far, this only concerns single-variable functions. If we want to generalize weight interpolation to the n-dimensional case, we have to use an n-dimensional generalization of the interval.

Idea 3: self-referential n-simplex data structure

A point is a 0-simplex. A line segment, a piece of a line between two points, is a 1-simplex. A triangle, a region of a plane enclosed by three line segments, is a 2-simplex. A tetrahedron is a 3-simplex. And if you can imagine how a tetrahedron becomes a point while traveling through the 4th dimension, that would be a 4-simplex. I hope you see a pattern here.

An n-simplex is the space between n+1 (n-1)-simplices.

This makes it a self-referential data structure. If we want to define an interpolating function F1 on a line, F2 on a triangle, and F3 on a tetrahedron, we only have to define Fn once on an n-simplex.
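
As a sketch of what self-referential means in code (the class names are mine, not something from the plots): an n-simplex stores nothing but its n+1 faces, and only a 0-simplex carries actual data.

class Point:                     # a 0-simplex: coordinates plus a basis function
    dimension = 0
    def __init__(self, coordinates, basis_function):
        self.coordinates = coordinates
        self.basis_function = basis_function

class Simplex:                   # an n-simplex: just a list of n+1 (n-1)-simplices
    def __init__(self, faces):
        self.faces = faces
    @property
    def dimension(self):
        return self.faces[0].dimension + 1

# A triangle built from three segments built from three points.
a = Point((0.0, 0.0), lambda p: 0.0)
b = Point((1.0, 0.0), lambda p: 1.0)
c = Point((0.0, 1.0), lambda p: 0.5)
ab, bc, ca = Simplex([a, b]), Simplex([b, c]), Simplex([c, a])
triangle = Simplex([ab, bc, ca])
print(triangle.dimension)        # 2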

With basis function weight interpolation, you already have your interpolating function defined at the basis points.

F(x) = Fn(x)
x = xi ⇒ Fn(x) = fi(x)

Consequently, this defines F0(x). For all the other n-simplices, the self-referential formula Fn(x) looks like this.

∀ x, x ∉ {xi}:   Fn(x) = ( Σj=1..n+1 k(|xj - x|) * Fn-1(xj) ) / ( Σj=1..n+1 k(|xj - x|) )

The formula looks almost like the one with the basis functions, except that now x, xi, and xj are all points, not numbers.

Speaking of which, since we want to interpolate basis functions in an n-simplex, we should compute our weights from the distances to the (n-1)-simplices. The xj is the projection of x onto the j-th (n-1)-simplex, and |xj - x| is then the distance from the point to its projection.
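
For the 2D case, the projection and the distance take only a few lines. A sketch with made-up names; the two endpoints stand in for a 1-simplex.

def project_onto_segment(p, a, b):
    """Projection of the point p onto the segment [a, b]
    and the distance from p to that projection."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    # Parameter of the projection on the line through a and b,
    # clamped to [0, 1] so the projection stays on the segment.
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))
    q = (a[0] + t * abx, a[1] + t * aby)
    distance = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return q, distance

print(project_onto_segment((0.5, 1.0), (0.0, 0.0), (1.0, 0.0)))  # ((0.5, 0.0), 1.0)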

Weight functions, basis functions, and now the (n-1)-simplex projections. This is enough to define an interpolation function on a triangle or a tetrahedron. But what about more complex shapes?

When we have a bunch of simplices that touch each other but never overlap, this is a simplicial complex. If you're ok with some representation error, you can cover any shape with a simplicial complex.

And what we can't cover with a complex, we can cover with spatial extrapolation.

With weight and basis functions, you can mix your interpolating function in n-simplices with interpolating functions on (n-1)-simplices. Now if a point isn't in an n-simplex but has a projection on an (n-1)-simplex, then computing the weights on this projection and applying them to the basis functions will make an extrapolation that will seamlessly merge with the interpolation.

The same goes for (n-2)-simplices, (n-3)-simplices, or generally (n-k)-simplices. Let's devise an algorithm by dividing interpolation and extrapolation into three cases.

  1. If a point is in a triangle, then F2 applies;
  2. if a point has a projection onto one of the complex's border line segments, then F1 works instead;
  3. if a point has no such projection, then the basis function fi of the nearest point is the extrapolation function.

Well, this particular algorithm doesn't work with concave shapes. Technically, we can still turn any concave simplicial complex into a convex one by adding more simplices. That's just more coding.
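
Here is a sketch of the three cases for a single triangle with constant basis functions at its vertices. It is deliberately simplified: projections are clamped to the segments, and all the names are mine, not the plots' code.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def project(p, a, b):
    """Projection of p onto the segment [a, b] and the unclamped line parameter t."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby)
    tc = max(0.0, min(1.0, t))
    return (a[0] + tc * abx, a[1] + tc * aby), t

def inside(p, a, b, c):
    """Point-in-triangle test: p is inside if it lies on the same side of all three edges."""
    cross = lambda o, u, v: (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s = [cross(a, b, p), cross(b, c, p), cross(c, a, p)]
    return all(x >= 0 for x in s) or all(x <= 0 for x in s)

def swine_triangle(p, vertices, values, n=2.0, eps=1e-9):
    a, b, c = vertices
    k = lambda d: 1.0 / max(d, eps) ** n
    edges = [((a, b), (values[0], values[1])),
             ((b, c), (values[1], values[2])),
             ((c, a), (values[2], values[0]))]

    def F1(q, ends, ys):
        # Weight interpolation on a 1-simplex: its faces are its two endpoints.
        wa, wb = k(dist(q, ends[0])), k(dist(q, ends[1]))
        return (ys[0] * wa + ys[1] * wb) / (wa + wb)

    if inside(p, a, b, c):
        # Case 1: F2, weighting each edge's F1 by the inverse weight of the projection distance.
        num = den = 0.0
        for ends, ys in edges:
            q, _ = project(p, *ends)
            w = k(dist(p, q))
            num += w * F1(q, ends, ys)
            den += w
        return num / den
    # Case 2: F1 on the nearest border segment the point projects onto.
    candidates = []
    for ends, ys in edges:
        q, t = project(p, *ends)
        if 0.0 < t < 1.0:
            candidates.append((dist(p, q), q, ends, ys))
    if candidates:
        _, q, ends, ys = min(candidates, key=lambda cand: cand[0])
        return F1(q, ends, ys)
    # Case 3: no projection, the nearest point's value does the extrapolation.
    return min(zip(vertices, values), key=lambda vy: dist(p, vy[0]))[1]

triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(swine_triangle((0.2, 0.2), triangle, [0.0, 1.0, 0.5]))   # inside: interpolation
print(swine_triangle((0.5, -1.0), triangle, [0.0, 1.0, 0.5]))  # outside: extrapolation

For a complex of several triangles, the same dispatch would run over all the triangles and the border segments of the complex; the plot below hard-codes that part.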

Here is an interactive plot. It has 6 triangles combined in a complex. Every point carries its own shade of gray, and the rest of the picture is the result of simplicial weight interpolation and extrapolation. The points are moveable, but the relations between triangles are hard-coded.

[Interactive plot: move the points, pick a basis function, and adjust the weight coefficient.]

And that's how we got from a pair of points to Simplicial Weight Interpolation and Extrapolation. Just as promised in the title.

P. S.

The code for all the plots on GitHub: