Let’s presume you already know how polynomial interpolation works and how polynomials behave globally. If not, please see Polynomial approximation and interpolation and Trippy polynomials in arctangent scale.
In “trippy polynomials”, there was an argument that while you can model continuous functions with polynomials locally, that is, on some specific interval with no discontinuities, modeling some functions globally, on the whole numeric range, is simply impossible due to the limited nature of polynomials themselves.
For instance, in computing, we normally model the sine function with a polynomial but only on the [0, π/2] range. Then we use special formulas to replicate the model and make it periodic. Why do we have to do that? Because polynomials are not periodic. Sine is infinitely differentiable; it can therefore afford infinitely many extremums, and polynomials, being of finite degree, can not.
Another example is the logarithm function. We can build an approximation for a logarithm on any interval in (0, +∞). But not at 0. The logarithm has a vertical asymptote at 0, and polynomials, by their nature, don’t have any.
We can’t do much about the first problem, and besides, it’s not that much of a problem to begin with. But we can still model functions with asymptotes by going just one step away from polynomial interpolation and adopting a rational one.
The name “rational” comes from ratio. The formula we’re going to use to model things is a ratio of two polynomials.
R(x) = P1(x) / P2(x)
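In code, such a function is as easy to evaluate as a pair of polynomials. Here is a minimal sketch; the function names and the example coefficients are made up for illustration:

# A rational function is a ratio of two polynomials. Coefficients go
# highest degree first, e.g. [2, 1] is 2x + 1 and [1, 0, 3] is x² + 3.
def eval_poly(coefficients, x):
    # Horner's scheme: ((c0·x + c1)·x + c2)·x + ...
    result = 0.0
    for c in coefficients:
        result = result * x + c
    return result

def eval_rational(numerator, denominator, x):
    return eval_poly(numerator, x) / eval_poly(denominator, x)

print(eval_rational([2, 1], [1, 0, 3], 2.0))  # (2·2 + 1) / (2² + 3) = 5/7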
We already know how to build an interpolating polynomial for a set of points. Can we do the same with a ratio of two polynomials?
Well, ambiguously. Normally, there is only one polynomial of degree N-1 that goes through N points, but with rational interpolation, there are multiple possible configurations. E. g. for a set of four points, there could only be one interpolating polynomial of the 3rd degree:
P(x) = ax³ + bx² + cx + d
But for the same four points, there are already 4 different minimal degree configurations of rational functions:
R(x) = (ax² + bx + c) / d
R(x) = (ax + b) / (cx + d)
R(x) = a / (bx² + cx + d)
R(x) = 1 / (ax³ + bx² + cx + d)
I’m saying “minimal degree configurations” because technically, this is a rational function too:
R(x) = (ax²³⁴ + b¹²³) / (cx⁴⁵⁶ + d³⁴⁵)
And it’s obviously different from:
R(x) = (ax + b) / (cx + d)
Although they can both interpolate 4 points.
For all the configurations, the method of obtaining coefficients is essentially the same. Remember what we did to get an interpolating polynomial? We put the input points into the polynomial, creating a system of equations, and then used a linear solver on it. We can do the same here.
For instance, for a four-point set (xi, yi) and the 2-by-2 minimal degree configuration, the equations would be:
(ax1 + b) / (cx1 + d) = y1
(ax2 + b) / (cx2 + d) = y2
(ax3 + b) / (cx3 + d) = y3
(ax4 + b) / (cx4 + d) = y4
But wait, you might say, these are not linear. Ah! Yes, not yet. But wait until we multiply both sides of every equation by (cxi + d), where (cxi + d) ≠ 0.
ax1 + b = y1 (cx1 + d)
ax2 + b = y2 (cx2 + d)
ax3 + b = y3 (cx3 + d)
ax4 + b = y4 (cx4 + d)
Then we expand and move the expressions to a single side of the equations so, for instance,
ax1 + b = y1 (cx1 + d)
becomes
ax1 + b = y1cx1 + y1d
and then
ax1 + b - y1cx1 - y1d = 0
This still doesn’t look like a linear equation at first sight, but remember, we’re hunting for a, b, c, and d here. The xi and yi are our input data. So when we put our equations together, we’ll get a system of linear equations after all:
ax1 + b - y1cx1 - y1d = 0
ax2 + b - y2cx2 - y2d = 0
ax3 + b - y3cx3 - y3d = 0
ax4 + b - y4cx4 - y4d = 0
We can totally solve that with a linear solver.
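For instance, here is a sketch that does exactly that with SymPy. One catch: all the right-hand sides are zeros, so the coefficients are only determined up to a common scale, and pinning one of them, say d = 1, picks a specific solution (more on that further down). The sample points here are made up, taken from the known ratio (2x + 1) / (x + 3), so the system is guaranteed to have a solution.

from sympy import symbols, solve, Rational

a, b, c, d = symbols('a b c d')

# Made-up sample points taken from (2x + 1) / (x + 3),
# so a 2-by-2 rational interpolant is known to exist
points = [(0, Rational(1, 3)), (1, Rational(3, 4)), (2, 1), (4, Rational(9, 7))]

# The linearized equations: a*xi + b - yi*c*xi - yi*d = 0.
# Pinning d = 1 removes the common-scale ambiguity.
eqs = [a*xi + b - yi*c*xi - yi*d for xi, yi in points] + [d - 1]

print(solve(eqs, (a, b, c, d)))  # expect {a: 2/3, b: 1/3, c: 1/3, d: 1}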
On the interactive plot below, you can add or move points to familiarize yourself with what rational interpolation feels like:
Now for a more practical task. Or one that looks practical anyway. Let’s model the log2(x) function.
Logarithm base 2 is the function that, for every positive argument y, returns x such that 2ˣ = y.
log2(1) = 0
log2(2) = 1
log2(4) = 2
log2(8) = 3
. . .
log2(65536) = 16
. . .
We can, of course, use the Taylor series to model this function in some range with a polynomial, but log2(x) isn’t defined for x ≤ 0. Moreover, it has an asymptote at 0, meaning that the function falls toward minus infinity as we come closer and closer to 0. We can’t possibly model this effect with a polynomial.
However, with rational interpolation, this would be easy. A vertical asymptote occurs when the denominator of the fraction is 0. And the denominator is zero when the polynomial representing the denominator has a root.
For a simple log2(x) model, we can use this ratio:
R(x) = (ax² + bx + c) / dx
Note that this is not a minimal degree configuration; we need x in the denominator so we can have a root in it.
There are 4 coefficients, so to compute them all, we need to put 4 facts about log2(x) into the equation soup. Let them be these:
log2(x) has an asymptote at 0. So the denominator dx turns to 0 at x = 0.
log2(1) = 0.
log2(2) = 1.
log2(4) = 2.
Also, this system does not have a single solution but a whole family of solutions since the ratio itself introduces ambiguity. E.g. 1 / 2 is the same as 2 / 4 or 5 / 10 or k / 2k in general.
To avoid having a parametric solution, let’s just take one of the coefficients and pin it down as a specific number. Any non-zero number will do, and any coefficient too. My personal preference would be d = 1. So with my preference in mind, the code that solves the system symbolically looks like this:
from sympy import *

a, b, c, d, N, D, R = symbols('a b c d N D R')
x = symbols('x')

N = a*x*x + b*x + c
D = d*x
R = N/D

# R(0) = -inf
# R(1) = 0 <=> N(1) = 0
# R(2) = 1 <=> N(2) - D(2) = 0
# R(4) = 2 <=> N(4) - 2*D(4) = 0
eqs = [
    d - 1,
    D.subs(x, 0),
    N.subs(x, 1),
    (N - D).subs(x, 2),
    (N - 2*D).subs(x, 4)
]

sol = solve(eqs, (a, b, c, d))
print(sol)
Here is the solution:
{a: 1/3, b: 1, c: -4/3, d: 1}
And here is the solution visualized:
The red line is the real log2(x). The blue line is the model. The plot is navigable.
This is not a terribly precise model, and it has an extra bit for x < 0 that we don’t want. But it is simple and for its simplicity, it is good enough. Better than the polynomial one.
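If you’d rather see numbers than a plot, here is a quick throwaway sketch that checks the model against the real log2, using the coefficients computed above:

from math import log2

# The model with the coefficients from the solution:
# R(x) = (x²/3 + x - 4/3) / x
def model(x):
    return (x*x/3 + x - 4/3) / x

for x in [0.5, 1, 2, 3, 4, 8]:
    print(f'x = {x}: log2(x) = {log2(x):+.3f}, model = {model(x):+.3f}')

# The model is exact at 1, 2, and 4 by construction,
# and drifts away as x leaves the [1, 4] range.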
Polynomial interpolation is fine, and for most intents and purposes, we can stick to that. But for some problems, you need to extend your modeling capabilities a bit, and then rational interpolation comes to help.
Rational interpolation is also a gateway term to rational Bezier functions and non-uniform rational basis splines also known as NURBS. If you want to understand NURBS well, you should start with simple rational functions first. They were a missing link in my curriculum, and I struggled with NURBS for years after college. Luckily, now you don’t have to.