Of Infinitesimals and Exponents
Is there such a thing as raising a number to an infinitesimal power?
Infinitesimal quantities, useful as they were to the development of calculus, have been deprecated in favor of limits. However, it is not the case that their existence is completely unjustified. In fact, it is rather easy to devise matrices whose powers “die off” at different rates:
\begin{gather*} \epsilon_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, ~~ \epsilon_3 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, ~~ \epsilon_3^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \\ \epsilon_2^2 = \bf{0}, ~~ \epsilon_3^3 = \bf{0} \end{gather*}
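These nilpotency claims are easy to verify computationally; here is a quick sketch using numpy (assumed available):

```python
# Verify that the "infinitesimal" matrices are nilpotent at different depths.
import numpy as np

eps2 = np.array([[0, 1],
                 [0, 0]])
eps3 = np.array([[0, 1, 0],
                 [0, 0, 1],
                 [0, 0, 0]])

# eps2 squares to zero; eps3 survives one more power before vanishing.
assert not (eps2 @ eps2).any()          # eps2^2 = 0
assert (eps3 @ eps3).any()              # eps3^2 != 0
assert not (eps3 @ eps3 @ eps3).any()   # eps3^3 = 0
```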
For the purposes of this post, I wish to narrow the kind of infinitesimal under consideration strictly to \varepsilon = \epsilon_2.
Their role in justifying calculus is at this point spent. However, it is also fun (and often interesting) to devise new “numbers” to see how they interact with preexisting structures. For example, what does it mean to take a number to an infinitesimal power (or more generally, a matrix power)?
A Frank Evaluation
Since exponentials and logarithms are inverses, it might make sense to argue:
x^\varepsilon = e^{\ln{x^\varepsilon}} = e^{\varepsilon \ln{x}}
e^x has a very nice power series, so we can try plugging this expression in:
\begin{align*} e^x &= \sum_{n=0}^\infty {x^n \over n!} = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + ... \\ e^{\varepsilon x} &= \sum_{n=0}^\infty {(\varepsilon x)^n \over n!} = 1 + {\varepsilon x} + {(\varepsilon x)^2 \over 2!} + {(\varepsilon x)^3 \over 3!} + ... \\ &= 1 + {\varepsilon x} \\ e^{\varepsilon \ln x} &= 1 + {\varepsilon \ln x} \end{align*}
Befitting its nature as the “opposite” of infinity, the infinitesimal \varepsilon transforms an infinite sum into a finite one. \varepsilon is realized as a square matrix (otherwise its powers would not exist), so we can also evaluate the exponential more directly:
\begin{align*} &\phantom{=\ } e^{\varepsilon \ln{x}} = \exp{\begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix}} \\ \exp{ \begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix} } &= \begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix}^0 + \begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix}^1 + {1 \over 2}\begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix}^2 + ... \\ &= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix} + {1 \over 2} \begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & \ln{x} \\ 0 & 0 \end{pmatrix} + ... \\ &= \begin{pmatrix} 1 & \ln{x} \\ 0 & 1 \end{pmatrix} = 1 + \varepsilon \ln x \end{align*}
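As a numerical sanity check, the same matrix exponential can be computed with scipy’s expm (scipy.linalg.expm; numpy and scipy assumed available), here for the sample value x = 2:

```python
# exp([[0, ln x], [0, 0]]) should equal [[1, ln x], [0, 1]], i.e. 1 + eps*ln(x).
import numpy as np
from scipy.linalg import expm

x = 2.0
A = np.array([[0.0, np.log(x)],
              [0.0, 0.0]])
result = expm(A)
expected = np.array([[1.0, np.log(x)],
                     [0.0, 1.0]])
assert np.allclose(result, expected)
```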
Of course, since \varepsilon and its matrix form are equivalent, the answer is the same as before. There are multiple problems with this argument, which become apparent on reexamining the earlier equation:
x^\varepsilon \stackrel{?_1}{=} e^{\ln{x^\varepsilon}} \stackrel{?_2}{=} e^{\varepsilon \ln{x}}
- It is not obvious that x^\varepsilon is reconcilable with e^x and the natural logarithm, i.e., that composing them is still an identity with respect to \varepsilon
- The power identity for logarithms may not be obeyed by \varepsilon
Additionally, only two terms of the series for e^x were actually used; there are many other power series whose first two coefficients are 1 and 1, and the natural logarithm(’s series) is simply the one corresponding to the inverse of e^x. Are there other compositions of a series and its inverse that could be considered?
Teeming with a lot of News
Fortunately, there is more than one way to identify the value of x^\varepsilon, and one in particular involves much less handwaving. The binomial theorem is a very useful tool for writing the power of a sum of numbers:
(x + 1)^n = \sum_{r=0}^n {n \choose r}x^r = \sum_{r=0}^n {n! \over {r!(n-r)!}}x^r
If the binomial coefficient is asserted to be 0 for r > n, then the binomial theorem can also be written as an infinite sum. However, the factor (n-r)! in the denominator no longer makes sense, since it would be the factorial of a negative number. On the other hand, the product of n with the r - 1 numbers immediately below it can be assigned a new symbol (n)_r (the falling factorial, written with the Pochhammer symbol). This falling factorial satisfies the 0 rule, since if n is a nonnegative integer smaller than r, then the product will include 0, annihilating all other terms.
\begin{gather*} (n)_0 = 1,~ (n)_r = (n - r + 1)(n)_{r-1} \\ (x + 1)^n = \sum_{r=0}^\infty {(n)_r \over r!} x^r \end{gather*}
However, if n is not an integer, then the series will miss 0, and continue indefinitely. This directly gives the series for square root:
\begin{align*} \sqrt{x + 1} &= (x + 1)^{1/2} = \sum_{r=0}^\infty {(1/2)_r \over r!} x^r \\ &= 1 + {1 \over 2}x + \left({1 \over 2!}\right) \left({1 \over 2}\right) \left({1 \over 2} - 1\right)x^2 + ... \\ &= 1 + {1 \over 2}x + \left({1 \over 2!}\right) \left(-{1 \over 4}\right)x^2 + \left({1 \over 3!}\right) \left(-{1 \over 4} \right) \left({1 \over 2} - 2 \right)x^3 + ... \\ &= 1 + {1 \over 2}x - {1 \over 8}x^2 + \left({1 \over 3!}\right) \left({3 \over 8} \right)x^3 + \left({1 \over 4!}\right) \left({3 \over 8} \right) \left({1 \over 2} - 3 \right)x^4 + ... \\ &= 1 + {1 \over 2}x - {1 \over 8}x^2 + {1 \over 16}x^3 - {5 \over 128}x^4 + ... \end{align*}
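The coefficients above can be computed exactly with Python’s fractions module; a small sketch (the helper name falling is my own):

```python
# Compute the binomial-series coefficients (n)_r / r! exactly, and check the
# square-root coefficients 1, 1/2, -1/8, 1/16, -5/128 derived above.
from fractions import Fraction
from math import factorial

def falling(n, r):
    """Falling factorial (n)_r = n (n-1) ... (n-r+1)."""
    out = Fraction(1)
    for k in range(r):
        out *= n - k
    return out

coeffs = [falling(Fraction(1, 2), r) / factorial(r) for r in range(5)]
assert coeffs == [1, Fraction(1, 2), Fraction(-1, 8),
                  Fraction(1, 16), Fraction(-5, 128)]
```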
Tumbling down Infinitesimals
Since this definition works for rational numbers as well as integers, is it possible to assign a value to (\varepsilon)_r? Indeed it is, since this symbol’s definition only requires that integers can be subtracted from it and that it can multiply with other numbers. In other words, it works in any ring containing the integers, a property which the matrices underlying \varepsilon very fortunately have.
\begin{align*} (\varepsilon)_0 &= 1,~ (\varepsilon)_r = (\varepsilon - r + 1)(\varepsilon)_{r-1} \\ (\varepsilon)_1 &= (\varepsilon - 0)(1) = \varepsilon \\ (\varepsilon)_2 &= (\varepsilon - 1)(\varepsilon) = \varepsilon^2 - \varepsilon = -\varepsilon \\ (\varepsilon)_3 &= (\varepsilon - 2)(-\varepsilon) = -\varepsilon^2 + 2\varepsilon = 2\varepsilon \\ (\varepsilon)_4 &= (\varepsilon - 3)(2\varepsilon) = 2\varepsilon^2 - 6\varepsilon = -6\varepsilon \\ & ... \\ (\varepsilon)_r &= (-1)^{r-1}(r-1)!\varepsilon,~ r > 0 \end{align*}
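The closed form can be confirmed exactly by iterating the recurrence on the 2×2 matrix itself; a sketch with numpy:

```python
# Compute (eps)_r directly from the 2x2 matrix and confirm the closed form
# (eps)_r = (-1)^(r-1) (r-1)! eps for r > 0.
import numpy as np
from math import factorial

eps = np.array([[0, 1],
                [0, 0]])
I = np.eye(2, dtype=int)

poch = I  # (eps)_0 = 1
for r in range(1, 6):
    poch = poch @ (eps - (r - 1) * I)   # (eps)_r = (eps)_{r-1} (eps - r + 1)
    expected = (-1) ** (r - 1) * factorial(r - 1) * eps
    assert (poch == expected).all()
```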
It is easy to check that this can also be directly computed from the matrix underlying \varepsilon. We can now plug this into the binomial formula:
\begin{align*} (x + 1)^\varepsilon &= \sum_{r=0}^\infty {(\varepsilon)_r \over r!} x^r = 1 + \sum_{r=1}^\infty {(\varepsilon)_r \over r!} x^r \\ &= 1 + \sum_{r=1}^\infty {(-1)^{r-1}(r-1)!\varepsilon \over r!} x^r \\ &= 1 + \varepsilon \sum_{r=1}^\infty {(-1)^{r-1} \over r} x^r \end{align*}
The final sum may be familiar, but let’s hold off from hastily cross-referencing a table. The term inside the sum looks a lot like an integral, so simplifying:
\begin{align*} \sum_{r=1}^\infty {(-1)^{r-1} \over r} x^r &= \sum_{r=1}^\infty \int {(-1)^{r-1}} x^{r-1} dx = \int \sum_{r=0}^\infty {(-1)^{r}} x^{r} dx \\ &= \int \sum_{r=0}^\infty (-x)^{r} dx = \int {dx \over 1 + x} \end{align*}
This is looking very promising! Substituting this expression for the previous sum, we can now conclude:
\begin{align*} (x + 1)^\varepsilon &= 1 + \varepsilon \int {dx \over 1 + x} \\ x^\varepsilon &= 1 + \varepsilon \int {dx \over x} \\ &= 1 + \varepsilon \ln x \end{align*}
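To see the series converge in practice, a sketch that sums the binomial series with the matrix \varepsilon as the exponent (the helper name matrix_binomial is hypothetical), for the sample point x + 1 = 1.5:

```python
# Sum (x + 1)^eps = sum_r (eps)_r / r! * x^r with a matrix exponent, and
# compare against 1 + eps*ln(x + 1).
import numpy as np

eps = np.array([[0.0, 1.0],
                [0.0, 0.0]])
I = np.eye(2)

def matrix_binomial(x, n_mat, terms=50):
    """Truncated binomial series for (x + 1)^n_mat with a matrix exponent."""
    total = np.zeros((2, 2))
    poch = I.copy()  # holds (n_mat)_r / r!
    for r in range(terms):
        total += poch * x ** r
        poch = poch @ (n_mat - r * I) / (r + 1)  # fold in 1/r! as we go
    return total

x = 0.5  # so x + 1 = 1.5
result = matrix_binomial(x, eps)
expected = I + eps * np.log(1.5)
assert np.allclose(result, expected)
```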
Fortunately, this slightly more grounded approach agrees with the initial result.
Other Elementary Algebraic Powers
Analogously, the Pochhammer symbol can be extended to other algebraic objects.
Complex
For example, the original approach with the imaginary unit i yields:
x^i = e^{\ln{x^i}} = e^{i\ln x} = \cos(\ln x) + i\sin(\ln x)
To be slightly more rigorous, we can calculate the real and imaginary components of (i)_r and tabulate the results:
| r | Real | Imaginary |
|---|---|---|
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 2 | -1 | -1 |
| 3 | 3 | 1 |
| 4 | -10 | 0 |
| 5 | 40 | -10 |
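The table can be reproduced with Python’s built-in complex numbers, iterating the same recurrence:

```python
# Reproduce the (i)_r table: (i)_{r+1} = (i - r)(i)_r.
rows = []
poch = complex(1, 0)
for r in range(6):
    rows.append((poch.real, poch.imag))
    poch = poch * (1j - r)
assert rows == [(1, 0), (0, 1), (-1, -1), (3, 1), (-10, 0), (40, -10)]
```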
The real component corresponds to OEIS A003703, and the imaginary to OEIS A009454, which state that they have exponential generating functions \cos(\ln(1+x)) and \sin(\ln(1+x)) respectively, agreeing with the intuitive series above after shifting x to x - 1. In fact, reexamining the binomial formula, it evidently produces exponential generating functions (in x - 1) based on the sequences given by the Pochhammer symbol.
\begin{gather*} (x + 1)^n = \sum_{r=0}^\infty {(n)_r \over r!} x^r \Longleftrightarrow x^n = \sum_{r=0}^\infty {(n)_r \over r!} (x-1)^r \\ (1 - x)^n = \sum_{r=0}^\infty {(-1)^r (n)_r \over r!} x^r \Longleftrightarrow x^n = \sum_{r=0}^\infty {(-1)^r (n)_r \over r!} (1-x)^r \end{gather*}
Split-complex
How about the hyperbolic analogue of the “circular” functions above? We want a series that equals \cosh(\ln x) + j\sinh(\ln x), for some algebraic j. Unlike with i, the exponential series does not need to alternate, but merely be partitioned into even and odd components. Therefore, j has the property that j^2 = 1,~ j \neq \pm 1. It has a very simple matrix presentation and Pochhammer sequence.
\begin{align*} j &:= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ (j)_2 &= j(j - 1) = 1 - j \\ (j)_3 &= (1 - j)(j - 2) = -3 + 3j = -3(1 - j) \end{align*}
With a little guesswork, this can be turned into a recurrence relation.
\begin{align*} (j)_r &\stackrel{?}{=} f(r)(1 - j) \\ (j)_{r+1} &= f(r+1)(1 - j) = f(r)(1 - j)(j - r) \\ &= f(r) \left( (1 - j)j - (1 - j)r^{\vphantom{1}} \right) = f(r)(j - 1 - r + jr) \\ &= f(r) \left( -(r + 1) + j(r + 1)^{\vphantom{1}} \right) \\ &= -f(r)(r + 1)(1 - j) \\ \implies f(r) &= -rf(r - 1) \\ &= {(-1)^r r! \over 2},~ r \ge 2 \end{align*}
Since this Pochhammer sequence contains r!, it (pleasingly) cancels out with the denominator of the binomial coefficient.
\begin{align*} x^j &= \sum_{r=0}^\infty {(j)_r \over r!} (x-1)^r \\ &= 1 + j(x-1) + (1-j)\sum_{r=2}^\infty {r!(-1)^r \over 2r!} (x - 1)^r \\ &= \phantom{+ j} \left(1 + {1 \over 2}\sum_{r=2}^\infty (-1)^r (x - 1)^r \right) \\ &\phantom{=} \vphantom{0} + j \left( x - 1 - {1 \over 2}\sum_{r=2}^\infty (-1)^r (x - 1)^r \right) \end{align*}
The remaining infinite sum is the same in the real and j components. It is very close to a geometric sum, so we can solve for the complete expression:
\begin{align*} \sum_{r=2}^\infty (-1)^r (x-1)^r &= \sum_{r=0}^\infty (-1)^r (x-1)^r ~-~ \sum_{r=0}^1 (-1)^r (x-1)^r \\ &= {1 \over 1 - (x - 1)} - (1 - (x - 1)) = {1 \over x} + x - 2 \\ &= {x^2 - 2x + 1 \over x} \\ x^j &= \left(1 + {x^2 - 2x + 1 \over 2x} \right) + j\left(x - 1 - {x^2 - 2x + 1 \over 2x} \right) \\ &= {x^2 + 1 \over 2x} + j{x^2 - 1 \over 2x} \end{align*}
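The closed form can be checked against a truncated binomial series computed with the matrix form of j; a sketch for the sample point x = 1.5:

```python
# Check x^j = (x^2+1)/(2x) + j*(x^2-1)/(2x) against the series
# sum_r (j)_r / r! * (x - 1)^r with j as a matrix.
import numpy as np

j = np.array([[0.0, 1.0],
              [1.0, 0.0]])
I = np.eye(2)

x = 1.5
total = np.zeros((2, 2))
poch = I.copy()  # holds (j)_r / r!
for r in range(60):
    total += poch * (x - 1) ** r
    poch = poch @ (j - r * I) / (r + 1)

expected = (x**2 + 1) / (2 * x) * I + (x**2 - 1) / (2 * x) * j
assert np.allclose(total, expected)
```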
This expression is equivalent to the expression in cosh and sinh after writing them in terms of the exponential function and simplifying the natural logarithm. Notably, its components parametrize the unit hyperbola x^2 - y^2 = 1. The components of t^i (switching variables to avoid confusion) also do so for the unit circle x^2 + y^2 = 1 (since the range of the natural logarithm is over all reals), but fail to do so rationally.
Golden
The golden ratio also has a very simple relationship with its square, in particular \phi^2 = \phi + 1. This minimal polynomial has degree 2, so numbers can be represented by 2-tuples of the form a + b\phi. The Pochhammer sequence of \phi is:
| r | Real | Phi |
|---|---|---|
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 2 | 1 | 0 |
| 3 | -2 | 1 |
| 4 | 7 | -4 |
| 5 | -32 | 19 |
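A quick check of this sequence, representing a + b\phi as integer pairs reduced with \phi^2 = \phi + 1 (the helper name mul is my own):

```python
# Pochhammer sequence of phi: (phi)_{r+1} = (phi)_r * (phi - r).
def mul(p, q):
    """(a + b*phi)(c + d*phi), folding phi^2 = phi + 1."""
    a, b = p
    c, d = q
    # a*c + (a*d + b*c)*phi + b*d*phi^2  ->  (a*c + b*d) + (a*d + b*c + b*d)*phi
    return (a * c + b * d, a * d + b * c + b * d)

rows = []
poch = (1, 0)
for r in range(6):
    rows.append(poch)
    poch = mul(poch, (-r, 1))  # multiply by (phi - r)
assert rows == [(1, 0), (0, 1), (1, 0), (-2, 1), (7, -4), (-32, 19)]
```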
Removing the sign alternation, both sequences can be located in the OEIS; the real component is OEIS A265165, and the \phi component is OEIS A306183 (alternating at A323620).
Opposite Infinitesimals
If we’re working with exponential-type series, wouldn’t it be nice if, for some \omega, x^\omega were an expression in e^x? This would mirror the relationship between x^\varepsilon and the natural logarithm.
The simplest “solution” is to allow some algebraic element all of whose Pochhammer symbols (besides the 0th) equal itself. In this case, we obtain:
\sum_{r=0}^\infty {(\omega)_r \over r!} (x-1)^r = 1 + \omega \sum_{r=1}^\infty {(x-1)^r \over r!} = 1 + \omega (e^{x-1}-1)
But the representation of \omega is a problem. It must satisfy all of the following equations:
\begin{align*} (\omega)_2 &= (\omega - 1)\omega = \omega^2 - \omega = \omega \\ (\omega)_3 &= (\omega - 2)\omega = \omega^2 - 2\omega = \omega \\ (\omega)_4 &= (\omega - 3)\omega = \omega^2 - 3\omega = \omega \\ ... \\ (\omega)_{r+1} &= (\omega - r)\omega = \omega^2 - r\omega = \omega \\[10pt] \implies \omega^2 &= (r + 1)\omega,~~ r \ge 1 \end{align*}
The simplest element that satisfies these relationships is obviously 0. Trying a little harder, it could make sense if \omega is some sort of infinite fixed point, such that multiplying it by any integer (or itself) is just itself. Whatever \omega actually represents, it cannot be captured in a matrix.
Infinitesimal Duality
Of course, there is another infinitesimal that we should not forget to consider: the transpose of \epsilon_2 (denoted henceforth \varepsilon').
However, \varepsilon and \varepsilon' do not play nicely with each other. Notice that \varepsilon + \varepsilon' = j. Is it the case that
x^\varepsilon x^{\varepsilon'} \stackrel{?}{=} x^{\varepsilon + \varepsilon'} = x^j
Surprisingly, no. Converting the left-hand side to matrices, we get two factors that do not commute:
\begin{align*} \begin{pmatrix} 1 & \ln x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \ln x & 1 \end{pmatrix} &= \begin{pmatrix} (\ln x)^2 + 1 & \ln x \\ \ln x & 1 \end{pmatrix} \\ = 1 + j\ln x + \begin{pmatrix} (\ln x)^2 & 0 \\ 0 & 0 \end{pmatrix} &\neq 1 + j\ln x + \begin{pmatrix} 0 & 0 \\ 0 & (\ln x)^2 \end{pmatrix} \\ = \begin{pmatrix} 1 & \ln x \\ \ln x & (\ln x)^2 + 1 \end{pmatrix} &= \begin{pmatrix} 1 & 0 \\ \ln x & 1 \end{pmatrix} \begin{pmatrix} 1 & \ln x \\ 0 & 1 \end{pmatrix} \end{align*}
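The noncommutativity is easy to exhibit numerically; a sketch for x = 2:

```python
# The two factors do not commute: reversing the order moves the (ln x)^2
# term from one diagonal entry to the other.
import numpy as np

x = 2.0
L = np.log(x)
a = np.array([[1.0, L], [0.0, 1.0]])   # x^eps
b = np.array([[1.0, 0.0], [L, 1.0]])   # x^eps'

ab = a @ b
ba = b @ a
assert not np.allclose(ab, ba)
assert np.isclose(ab[0, 0], L**2 + 1) and np.isclose(ba[1, 1], L**2 + 1)
```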
Thus, numbers of the form a + b\varepsilon will not commute with numbers of the form a + b\varepsilon'. Neither of these expressions are x^j.
Note also that \varepsilon = {i + j \over 2}, but i and j do not commute. The placement of the (\ln x)^2 term is the phantom of another matrix k, which obeys k^2 = 1 but is distinct from both 1 and j.
\begin{align*} k &:= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\ 1 + j\ln x + \begin{pmatrix} (\ln x)^2 & 0 \\ 0 & 0 \end{pmatrix} &= 1 + j\ln x + (1 + k)\left( {(\ln x)^2 \over 2} \right) \\ 1 + j\ln x + \begin{pmatrix} 0 & 0 \\ 0 & (\ln x)^2 \end{pmatrix} &= 1 + j\ln x + (1 - k)\left( {(\ln x)^2 \over 2} \right) \end{align*}
Skew-Infinitesimals
What about 1 + j (or equivalently 1 + k)? In either case, the result has determinant 0, and in k’s case it is another kind of infinitesimal lying along the main diagonal. Its Pochhammer sequence also terminates.
\begin{align*} (j + 1)_1 &= j + 1 \\ (j + 1)_2 &= (j + 1)j = j + 1 \\ (j + 1)_3 &= (j + 1)(j - 1) = 0 \end{align*}
This also makes sense by analyzing x^{j + 1} = x(x^j), showing that the addition of exponents is preserved in some circumstances.
\begin{align*} x(x^j) &= {x^2+1 \over 2} + j{x^2-1 \over 2} \\ x^{1 + j} &= \sum_{n=0}^\infty {(j + 1)_n \over n!}(x - 1)^n \\ &= 1 + (1 + j)(x - 1) + {1 \over 2}(1 + j)(x - 1)^2 \\ &= {(x - 1)^2 \over 2} + x + j\left({(x -1)^2 \over 2} + x - 1 \right) = x(x^j) \end{align*}
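This identity holds at every sample point; a sketch representing a + bj as a pair (a, b), with helper names of my own choosing:

```python
# Confirm x^(1+j) = x * x^j at a few sample points, with j^2 = 1.
def xj(x):
    """x^j from the closed form derived earlier."""
    return ((x**2 + 1) / (2 * x), (x**2 - 1) / (2 * x))

def x1j(x):
    """x^(1+j) from its terminating Pochhammer series."""
    t = x - 1
    return (1 + t + t**2 / 2, t + t**2 / 2)

for x in (0.5, 1.0, 2.0, 3.0):
    a, b = xj(x)
    assert abs(x * a - x1j(x)[0]) < 1e-12
    assert abs(x * b - x1j(x)[1]) < 1e-12
```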
Intriguingly, this clears the x from the denominator of x^j, so the result no longer necessitates power series in x. Since it has only even components, this expression also has a square root:
\sqrt{x^{j + 1}} = 1 + {j + 1 \over 2}(x - 1) = {x+1 \over 2} + j{x-1 \over 2}
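Squaring the claimed root recovers x^{j+1}; a sketch using pairs (a, b) for a + bj with j^2 = 1 (helper name sq is my own):

```python
# Check ((x+1)/2 + j*(x-1)/2)^2 = (x^2+1)/2 + j*(x^2-1)/2.
def sq(p):
    a, b = p
    return (a * a + b * b, 2 * a * b)  # (a + b*j)^2 with j^2 = 1

for x in (0.5, 2.0, 3.0):
    a, b = sq(((x + 1) / 2, (x - 1) / 2))
    assert abs(a - (x**2 + 1) / 2) < 1e-12
    assert abs(b - (x**2 - 1) / 2) < 1e-12
```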
Closing
It is good to remember that the natural logarithm of x grows slower than any positive power of x. Similarly, infinitesimals can be imagined as smaller than any positive rational number. In some way, taking a number to an infinitesimal power is related to a function that grows slower than all rational powers, albeit only within an infinitesimal “space”.
Generalizing slightly, it becomes obvious that algebraic powers can be assigned a series, which can have an interesting representation in transcendental functions.