# Being (re)born(-again)

I wasn’t around these parts much yesterday, for a number of reasons: I spent the better part of midday on campus, splitting time between errands, a meeting with one professor about the stuff he’s working on, and parsing through another professor’s latest research paper; the hours around that were occupied by birthday things, because yesterday was the twenty-seventh annual celebration of my being birthed.

That’s right: I’m 27 now, which means – among other things – that I have fewer than 365 days (I’m not sure exactly how many) to supplant Jean-Pierre Serre as the youngest Fields Medal winner. Not a good feeling for a just-blossoming mathematician. Le sigh.

I guess the good news is that I have 13 years to maybe squeeze one in there. Thirteen years…that’s 4,747 days from today. That makes it seem like I have plenty of time to work!

</forced optimism>

In any case…

The paper I’m reading is on the behavior of M-conformal mappings taking values in a Clifford algebra $\mathcal{C}\ell_{m,n}$ over $\mathbb{R}$. I’m gonna take a second to discuss some of the preliminaries of that topic and to outline the proof of a basic assertion that’s considered common knowledge among those in the know. Worth noting is that the background of Clifford algebras (and hence of the analysis of functions on them) extends far more deeply than what’s discussed here, so this should in no way be taken as an actual, worthwhile exposition on the ease/difficulty of the topic.

Let $V=\mathbb{R}^{m+n}$ denote the base vector space. The Clifford algebra $\mathcal{C}\ell_{m,n}(V)$ is a real algebra of dimension $2^{m+n}$, generated by the basis elements $\{e_1,\ldots,e_m,e_{m+1},\ldots,e_{m+n}\}$ of $V$ subject to the identities

$e_1^2=\cdots= e_m^2=1$ and $e_{m+1}^2=e_{m+2}^2=\cdots=e_{m+n}^2=-1$.

For convenience, set $e_0=1$. The pair $(m,n)$ is called the signature of the Clifford algebra, and for signatures of the form $(0,n)$ (which are the focus here), the spaces in question are direct extensions of the construction of $\mathbb{C}$ from $\mathbb{R}$. As such, the generating relations of $\mathcal{C}\ell_{0,n}$ can be written compactly as

$e_ie_j+e_je_i=-2\delta_{ij}e_0$ where $\delta_{ij}$ is Kronecker’s delta.

Clearly, then, $\mathcal{C}\ell_{0,n}$ is a real vector space of dimension $2^n$; moreover, there’s an obvious chain of algebras $\mathbb{R}\subset\mathbb{C}\subset\mathbb{H}\subset\mathcal{C}\ell_{0,3}\subset\ldots$ (via $\mathbb{C}\cong\mathcal{C}\ell_{0,1}$ and $\mathbb{H}\cong\mathcal{C}\ell_{0,2}$) ordered by inclusion.
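Since the relations above completely determine multiplication in $\mathcal{C}\ell_{0,n}$, they’re easy to machine-check. Here’s a small Python sketch (a toy of my own, not anything from the paper) that multiplies basis blades and confirms both the generator identities and the fact that the span of $\{1,e_1\}$ multiplies exactly like $\mathbb{C}$:

```python
# Toy model of Cl(0,n): a multivector is a dict {blade: coeff}, where a
# blade is a sorted tuple of distinct generator indices ((): the scalar 1,
# (1,): e1, (1,2): e1*e2, ...).

def blade_mul(a, b):
    """Multiply basis blades using e_i e_j = -e_j e_i (i != j), e_i^2 = -1."""
    sign, out = 1, list(a)
    for g in b:
        sign *= (-1) ** sum(1 for h in out if h > g)  # jump past larger indices
        if g in out:
            out.remove(g)     # e_g * e_g = -1
            sign = -sign
        else:
            out.append(g)
            out.sort()
    return sign, tuple(out)

def mv_mul(x, y):
    """Clifford product of two multivectors, dropping zero coefficients."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

e1, e2 = {(1,): 1}, {(2,): 1}

# generator identities: e_i^2 = -1 and e1 e2 = -e2 e1
assert mv_mul(e1, e1) == {(): -1}
assert mv_mul(e1, e2) == {(1, 2): 1} and mv_mul(e2, e1) == {(1, 2): -1}

# span{1, e1} multiplies like C: (a+b*e1)(c+d*e1) = (ac-bd) + (ad+bc)e1
z = lambda a, b: {(): a, (1,): b}
assert mv_mul(z(2, 3), z(4, 5)) == z(2*4 - 3*5, 2*5 + 3*4)
print("Cl(0,n) toy product: all checks passed")
```

The blade encoding is just bookkeeping for the anticommutation relation; nothing here is specific to small $n$.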

Let $\Omega$ be an open subset of $\mathbb{R}^{n+1}$ with (piecewise) smooth boundary, and suppose without loss of generality that $\mathbf{0}\in\Omega$. For notational convenience, let $\mathcal{A}_n$ denote the $\mathbb{R}$-linear span of $\{e_0,e_1,\ldots,e_n\}$ where, again, $e_i^2=-1$ for all $i\neq 0$. Let $f:\Omega\to\mathcal{A}_n$ be a function of the form

$f(x)=u_0(x)+\sum_{l=1}^n u_l(x)e_l$ where $u_l\in C^{\infty}(\Omega)$ for all $l$.

Let $D_n$ be the generalized Cauchy-Riemann operator of the form

$D_n=\dfrac{\partial}{\partial x_0}+\sum_{i=1}^n e_i\dfrac{\partial}{\partial x_i}$.

A Clifford-valued function $f:\Omega\to\mathcal{A}_n$ is said to be (left) monogenic if $D_n f=0$ in all of $\Omega$. The pair of equations $D_nf=fD_n=0$ is actually what I spent most of yesterday working on.
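To convince myself the operator does what I think it does, a quick numerical check helps. The sketch below (mine, not the paper’s) implements a toy blade product for $\mathcal{C}\ell_{0,n}$ and verifies by central differences that the function $f(x)=x_1-x_0e_1$ (a standard “Fueter variable”; my choice of example) is annihilated by $D_2$ on both sides, while $x_1+x_0e_1$ is not:

```python
# A multivector in Cl(0,n) is a dict {blade: coeff}; a blade is a sorted
# tuple of generator indices, with () standing for the scalar 1 (= e0).

def blade_mul(a, b):
    """Multiply basis blades using e_i e_j = -e_j e_i (i != j), e_i^2 = -1."""
    sign, out = 1, list(a)
    for g in b:
        sign *= (-1) ** sum(1 for h in out if h > g)
        if g in out:
            out.remove(g)   # e_g e_g = -1
            sign = -sign
        else:
            out.append(g)
            out.sort()
    return sign, tuple(out)

def mv_mul(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, bl = blade_mul(a, b)
            out[bl] = out.get(bl, 0.0) + s * ca * cb
    return out

def mv_add(x, y):
    out = dict(x)
    for bl, v in y.items():
        out[bl] = out.get(bl, 0.0) + v
    return out

H = 1e-6

def partial(f, k, x):
    """Central-difference d f / d x_k at x, for paravector-valued f."""
    xp, xm = list(x), list(x)
    xp[k] += H
    xm[k] -= H
    fp, fm = f(xp), f(xm)
    return {bl: (fp.get(bl, 0.0) - fm.get(bl, 0.0)) / (2 * H)
            for bl in set(fp) | set(fm)}

def D_left(f, x, n):
    """D_n f = sum_k e_k (df/dx_k), approximated numerically."""
    e = lambda k: {(): 1.0} if k == 0 else {(k,): 1.0}
    total = {}
    for k in range(n + 1):
        total = mv_add(total, mv_mul(e(k), partial(f, k, x)))
    return total

def D_right(f, x, n):
    """f D_n = sum_k (df/dx_k) e_k."""
    e = lambda k: {(): 1.0} if k == 0 else {(k,): 1.0}
    total = {}
    for k in range(n + 1):
        total = mv_add(total, mv_mul(partial(f, k, x), e(k)))
    return total

fueter = lambda x: {(): x[1], (1,): -x[0]}   # z1 = x1 - x0 e1
nonmono = lambda x: {(): x[1], (1,): x[0]}   # x1 + x0 e1

pt = [0.3, -0.7, 0.2]
assert all(abs(c) < 1e-6 for c in D_left(fueter, pt, 2).values())
assert all(abs(c) < 1e-6 for c in D_right(fueter, pt, 2).values())
assert any(abs(c) > 1.0 for c in D_left(nonmono, pt, 2).values())
print("z1 is two-sided monogenic; its 'conjugate' is not")
```

Both example functions are linear, so the central differences are essentially exact and the tolerances are loose.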

Proposition. The pair of equations $D_nf=fD_n=0$ is equivalent to the system of equations

$\left\{\begin{array}{rl}\dfrac{\partial u_0}{\partial x_0}-\displaystyle\sum_{l=1}^n\dfrac{\partial u_l}{\partial x_l}=0,&\\[1.5em] \dfrac{\partial u_l}{\partial x_0}+\dfrac{\partial u_0}{\partial x_l}=0 & \text{for }1\leq l\leq n,\\[1.5em] \dfrac{\partial u_m}{\partial x_l}-\dfrac{\partial u_l}{\partial x_m}=0 & \text{for }m\neq l,\ 1\leq m,l\leq n. \end{array}\right.$     $(1)$

(The index $0$ plays a special role throughout: $e_0=1$ squares to $+1$ and commutes with every $e_l$, so the equations involving $u_0$ carry different signs than the rest.)
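Before sketching a proof, a cheap numerical sanity check (my example, not the paper’s): the function $f(x)=x_1-x_0e_1$ is two-sided monogenic in $\mathcal{A}_2$, with components $u_0=x_1$, $u_1=-x_0$, $u_2=0$, so those components should satisfy the divergence-type equation $\partial_0 u_0=\sum_{l\geq 1}\partial_l u_l$, the Cauchy–Riemann-type equations $\partial_0 u_l+\partial_l u_0=0$, and symmetry of the remaining mixed partials:

```python
# Components of f(x) = x1 - x0*e1, a function in A_2 with D_2 f = f D_2 = 0.
# (The example function is mine, not from the paper.)
u = [lambda x: x[1], lambda x: -x[0], lambda x: 0.0]   # u_0, u_1, u_2
n, H = 2, 1e-6

def d(l, k, x):
    """Central-difference approximation of du_l/dx_k at the point x."""
    xp, xm = list(x), list(x)
    xp[k] += H
    xm[k] -= H
    return (u[l](xp) - u[l](xm)) / (2 * H)

x = [0.4, -1.1, 0.8]
# divergence-type equation coupling the diagonal partials
assert abs(d(0, 0, x) - sum(d(l, l, x) for l in range(1, n + 1))) < 1e-6
# Cauchy-Riemann-type equations coupling u_0 with each u_l
assert all(abs(d(l, 0, x) + d(0, l, x)) < 1e-6 for l in range(1, n + 1))
# symmetry of the remaining mixed partials
assert all(abs(d(m, l, x) - d(l, m, x)) < 1e-6
           for l in range(1, n + 1) for m in range(1, n + 1) if m != l)
print("the example satisfies the whole first-order system")
```

One example obviously proves nothing, but it did catch my sign errors more than once.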

A version of this statement was given without proof in a single sentence near the beginning of my professor’s paper, and since my goal is to know what he knows, I took out some paper and decided to prove it. My first goal was to write $f$ in terms of the $u_l$ and to see exactly how that behaved under the operator $D_n$ (starting with $D_nf$). In particular, writing $f(x)=\sum_{l=0}^n u_l(x)e_l$ and $D_n=\sum_{k=0}^n e_k\left(\partial/\partial x_k\right)$:

$\begin{array}{rcl}D_n f & = & D_n\left(\displaystyle\sum_{l=0}^n u_l(x)e_l\right) \\[2em] & = & \left(\displaystyle\sum_{k=0}^ne_k\dfrac{\partial}{\partial x_k}\right)\circ\left(\displaystyle\sum_{l=0}^n u_l(x)e_l\right)\\[2em] & = & \left(\displaystyle\sum_{k=0}^ne_k\dfrac{\partial\left(\displaystyle\sum_{l=0}^n u_l(x)e_l\right)}{\partial x_k}\right)\\[3em]& = & \displaystyle\sum_{k=0}^n e_k\sum_{l=0}^n\dfrac{\partial u_l}{\partial x_k}e_l\\[2em] & = & \displaystyle\sum_{k,l} e_k\,e_l\,\dfrac{\partial u_l}{\partial x_k}\,\,\,\,\,\,\,\,\,\,(2)\end{array}$.

With this as some sort of scratch work, a proof sketch can be made.

Proof of the Proposition (Sketch).
$\left(\Longrightarrow\right)$ Suppose $D_nf=fD_n=0$. In particular, the equation in $(2)$ above holds. Notice that the system in $(1)$ splits into cases, the first dealing with the “mixed partials” $\partial u_l/\partial x_k$ when $l=k$ and the others with actual mixed partials for $l\neq k$. Unsurprisingly, the proof hinges on the behavior of the “mixed products” $e_k\,e_l$. Two facts do all the work here: for $k,l\geq 1$, the identity mentioned above, $e_ke_l+e_le_k=-2\delta_{kl}e_0$, gives $e_k^2=-1$ (case $k=l$) and $e_ke_l=-e_le_k$ (case $k\neq l$); on the other hand, $e_0=1$ satisfies $e_0^2=+1$ and commutes with every $e_l$. Consider breaking $(2)$ further into three cases for $k<l$, $k>l$, and $k=l$:

$D_nf = \displaystyle\sum_{k<l} e_k\,e_l\,\dfrac{\partial u_l}{\partial x_k} + \sum_{k>l}e_k\,e_l\,\dfrac{\partial u_l}{\partial x_k} + \sum_{k=l}e_k\,e_l\,\dfrac{\partial u_l}{\partial x_k}=0\,\,\,\,\,\,\,\,\,\,(3)$.

The mindset here is as follows:

Consider an $(n+1)\times(n+1)$ matrix $M$ whose rows are indexed by $k$, whose columns are indexed by $l$, and whose entries satisfy $(m)_{k,l}=e_k\,e_l$. $M$ is almost antisymmetric: its diagonal entries are $(m)_{0,0}=e_0^2=+1$ and $(m)_{k,k}=e_k^2=-1$ for $k\geq 1$; in the block with $k,l\geq 1$, the lower-triangular entries are exactly the negatives of the upper-triangular ones, i.e. $(m)_{k,l}=-(m)_{l,k}$; but the zeroth row and column are symmetric, since $(m)_{0,l}=e_l=(m)_{l,0}$. Now modify $M$ into the $(n+1)\times(n+1)$ matrix $M'$ whose entries are the values $(m')_{k,l}=e_k\,e_l\,(\partial u_l/\partial x_k)$, so that summing all the entries of $M'$ gives exactly $(3)$. The terms then group according to which basis element of $\mathcal{C}\ell_{0,n}$ they multiply: the diagonal produces the scalar part, the zeroth row and column pair up into vector terms, and the $k,l\geq 1$ off-diagonal block pairs up into bivector terms. Hence, the expression in $(3)$ can be rewritten as follows:

$\begin{array}{rcl}0 & = & D_nf \\[1em] & = & \displaystyle\sum_{k=l}e_k\,e_l\,\dfrac{\partial u_l}{\partial x_k} + \sum_{l=1}^n\left(e_0\,e_l\,\dfrac{\partial u_l}{\partial x_0} + e_l\,e_0\,\dfrac{\partial u_0}{\partial x_l}\right) + \sum_{1\leq k<l\leq n}\left(e_k\,e_l\,\dfrac{\partial u_l}{\partial x_k} + e_l\,e_k\,\dfrac{\partial u_k}{\partial x_l}\right) \\[2em] & = & \left(\dfrac{\partial u_0}{\partial x_0}-\displaystyle\sum_{l=1}^n\dfrac{\partial u_l}{\partial x_l}\right) + \displaystyle\sum_{l=1}^n e_l\left(\dfrac{\partial u_l}{\partial x_0}+\dfrac{\partial u_0}{\partial x_l}\right) + \displaystyle\sum_{1\leq k<l\leq n} e_k\,e_l\left(\dfrac{\partial u_l}{\partial x_k}-\dfrac{\partial u_k}{\partial x_l}\right) \end{array}\,\,\,\,\,\,\,\,\,\,(4)$.

Finally, the elements $e_0$, the $e_l$ for $1\leq l\leq n$, and the products $e_k\,e_l$ for $1\leq k<l\leq n$ are linearly independent in $\mathcal{C}\ell_{0,n}$, so the expression in $(4)$ equals $0$ precisely when the coefficient of each of them vanishes; those coefficients are exactly the left-hand sides of the equations in $(1)$. Thus, $(4)$ implies $(1)$. Note that the analogous regrouping of $fD_n=0$ (where the bivector coefficients appear with opposite sign) yields the same system.

$\left(\Longleftarrow\right)$ Suppose the system in $(1)$ holds. Then each of the grouped coefficients in $(4)$ vanishes, so the whole expression equals $0$; tracing through the above equalities “backwards” yields that $D_nf=0$, and by modifying appropriately, that $fD_n=0$ as well.   $\square$
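One last sanity check on the bookkeeping: the regrouping of the raw double sum in $(2)$ into scalar, vector, and bivector parts is a purely algebraic identity in the coefficients $\partial u_l/\partial x_k$, so it can be machine-checked with arbitrary numbers standing in for the partials. A quick sketch (mine; the blade arithmetic is a toy model of $\mathcal{C}\ell_{0,n}$):

```python
import random

# Toy model of Cl(0,n): a multivector is a dict {blade: coeff}; a blade is a
# sorted tuple of generator indices, () being the scalar 1 (= e0).

def blade_mul(a, b):
    """e_i e_j = -e_j e_i for i != j, and e_i^2 = -1."""
    sign, out = 1, list(a)
    for g in b:
        sign *= (-1) ** sum(1 for h in out if h > g)
        if g in out:
            out.remove(g)
            sign = -sign
        else:
            out.append(g)
            out.sort()
    return sign, tuple(out)

def mv_mul(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, bl = blade_mul(a, b)
            out[bl] = out.get(bl, 0) + s * ca * cb
    return out

def mv_add(x, y):
    out = dict(x)
    for bl, v in y.items():
        out[bl] = out.get(bl, 0) + v
    return out

random.seed(1)
n = 3
# c[l][k] stands in for the partial derivative du_l/dx_k
c = [[random.randint(-5, 5) for _ in range(n + 1)] for _ in range(n + 1)]
E = [{(): 1}] + [{(k,): 1} for k in range(1, n + 1)]

# raw double sum: sum over k, l of e_k e_l c[l][k]
lhs = {}
for k in range(n + 1):
    for l in range(n + 1):
        lhs = mv_add(lhs, mv_mul(mv_mul(E[k], E[l]), {(): c[l][k]}))

# grouped form: scalar part + vector parts + bivector parts
rhs = {(): c[0][0] - sum(c[l][l] for l in range(1, n + 1))}
for l in range(1, n + 1):
    rhs = mv_add(rhs, mv_mul(E[l], {(): c[l][0] + c[0][l]}))
for k in range(1, n + 1):
    for l in range(k + 1, n + 1):
        rhs = mv_add(rhs, mv_mul(mv_mul(E[k], E[l]), {(): c[l][k] - c[k][l]}))

clean = lambda m: {bl: v for bl, v in m.items() if v != 0}
assert clean(lhs) == clean(rhs)
print("raw double sum == scalar + vector + bivector grouping")
```

Since the coefficients are random integers, the dictionaries compare exactly; no floating-point tolerance is needed.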

Anyway, I just thought this was a nifty little exercise. It took me a little while to figure out whether I believed the proposition, and over an hour to figure out why I should believe it and how I could write out the details.

Suffice it to say, I’m pretty hooked on this Clifford Analysis thing.

Well, it’s now almost 6:30pm (about 5 hours after I started writing this thing, thanks to lots of interruptions + baby duty in the middle), so I should probably try to get through some new stuff. Maybe I’ll spend some time in this differential algebra text…

…maybe….

Until next time….


## 4 thoughts on “Being (re)born(-again)”

1. Pingback: Realization | riemannian hunger

2. Muphrid

I’m not entirely familiar with your motivation (having come across this post from elsewhere). It sounds like from the paper you mention they were more interested in spaces that directly and obviously extend the complex plane, and so they chose a (0,n) signature, but I think this can be very misleading. A (2,0) signature space adequately captures all the relevant properties of the complex plane, with the benefit that the “Cauchy-Riemann operator” is just the vector derivative (del, nabla, whatever you want to call it). Similarly, quaternions end up appearing naturally in a (3,0) signature space. Complex structure need not be assumed to arise by using a mixed (or entirely negative) signature space.