I wasn’t around these parts much yesterday for a number of reasons: I spent the better part of the midday on campus, splitting time between errands, a meeting with one professor about the stuff he’s working on, and parsing through another professor’s latest research paper; the hours around that were occupied by birthday things, because yesterday was the twenty-seventh annual celebration of my being birthed.
That’s right: I’m 27 now, which means – among other things – that I have fewer than 365 days (I’m not sure exactly how many) to supplant Jean-Pierre Serre as the youngest Fields Medal winner. Not a good feeling for a just-blossoming mathematician. Le sigh.
I guess the good news is that I have 13 years to maybe squeeze one in there. Thirteen years…that’s 4,747 days from today. That makes it seem like I have plenty of time to work!
In any case…
The paper I’m reading is on the behavior of M-conformal mappings taking values in a Clifford algebra over $\mathbb{R}$. I’m gonna take a second to discuss some of the preliminaries of that topic and to maybe outline the proof of a basic assertion that’s generally considered common knowledge among those in the know. Worth noting is that the background of Clifford algebras (and hence of the analysis of functions thereon) extends far more deeply than what’s discussed here, so this should in no way be taken as an actual, worthwhile exposition on the ease/difficulty of the topic.
Let $\mathbb{R}^n$ denote the base vector space. The Clifford algebra $C\ell(\mathbb{R}^n)$ is the real associative algebra generated by basis elements $e_1, e_2, \ldots, e_n$ subject to the identities

$$e_i^2 = \pm 1 \quad \text{and} \quad e_i e_j = -e_j e_i \ \text{ for } i \neq j.$$
For generality, let $e_i^2 = +1$ for $i = 1, \ldots, p$ and $e_i^2 = -1$ for $i = p+1, \ldots, p+q$, where $p + q = n$. The pair $(p, q)$ is called the signature of the Clifford algebra $C\ell_{p,q}$, and for signatures of the form $(0, n)$ (which are the focus here), the spaces in question are direct extensions of the construction of $\mathbb{C}$ from $\mathbb{R}$: indeed, $C\ell_{0,1} \cong \mathbb{C}$ and $C\ell_{0,2} \cong \mathbb{H}$. As such, the basis elements of $C\ell_{0,n}$ are further characterized by the identity

$$e_i e_j + e_j e_i = -2\delta_{ij}, \qquad i, j = 1, \ldots, n,$$

where $\delta_{ij}$ is Kronecker’s delta.
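As a quick aside (this isn’t from the paper), the defining identities are concrete enough to sanity-check in code. Below is a minimal Python sketch of basis-blade multiplication in $C\ell_{0,n}$; the function name and the representation (a sorted tuple of generator indices per blade) are my own choices, not anything standard:

```python
def blade_mul(A, B):
    """Multiply basis blades e_A * e_B in Cl(0, n).

    A and B are sorted tuples of generator indices. Returns (sign, C)
    such that e_A * e_B = sign * e_C, using e_i e_j = -e_j e_i for
    i != j and e_i**2 = -1 (the signature-(0, n) identities).
    """
    coeffs = list(A)
    sign = 1
    for b in B:
        # Slide e_b leftward into sorted position; each transposition
        # past a distinct generator flips the sign.
        pos = len(coeffs)
        while pos > 0 and coeffs[pos - 1] > b:
            pos -= 1
            sign = -sign
        if pos > 0 and coeffs[pos - 1] == b:
            coeffs.pop(pos - 1)  # e_b * e_b contributes a factor of -1
            sign = -sign
        else:
            coeffs.insert(pos, b)
    return sign, tuple(coeffs)

# Check e_i e_j + e_j e_i = -2 * delta_ij on a few generators:
for i in range(1, 4):
    for j in range(1, 4):
        s1, C1 = blade_mul((i,), (j,))
        s2, C2 = blade_mul((j,), (i,))
        if i == j:
            assert (s1, C1) == (-1, ())     # e_i^2 = -1
        else:
            assert C1 == C2 and s1 == -s2   # anticommutation
```

Note that the loop at the bottom verifies exactly the identity above: diagonal products square to $-1$, and off-diagonal products anticommute.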
Clearly, then, $C\ell_{0,n}$ is a real vector space of dimension $2^n$, spanned by the products $e_A = e_{i_1} e_{i_2} \cdots e_{i_k}$ for subsets $A = \{i_1 < i_2 < \cdots < i_k\} \subseteq \{1, \ldots, n\}$ (with $e_\emptyset = 1$); moreover, there’s an obvious chain of spaces $\mathbb{R} \cong C\ell_{0,0} \subset C\ell_{0,1} \subset C\ell_{0,2} \subset \cdots$ ordered by set inclusion.
Let $\Omega$ be an open subset of $\mathbb{R}^{n+1}$ with a boundary $\partial\Omega$ which is (piecewise) smooth, and suppose without loss of generality that $0 \in \Omega$. For notational convenience, let $\mathcal{A}$ denote the $\mathbb{R}$-linear span of $\{e_0, e_1, \ldots, e_n\}$ (the so-called paravectors), where $e_0 = 1$ and where, again, $e_i^2 = -1$ for all $i = 1, \ldots, n$. Let $f : \Omega \to \mathcal{A}$ be a function of the form

$$f(x) = f_0(x) + \sum_{i=1}^{n} f_i(x)\, e_i,$$

where $f_i : \Omega \to \mathbb{R}$ for all $i = 0, 1, \ldots, n$.
Let $D$ be the generalized Cauchy-Riemann operator of the form

$$D = \frac{\partial}{\partial x_0} + \sum_{i=1}^{n} e_i \frac{\partial}{\partial x_i},$$

where points of $\mathbb{R}^{n+1}$ are written $x = (x_0, x_1, \ldots, x_n)$.
A Clifford-valued function $f$ is said to be monogenic if $Df = fD = 0$ in all of $\Omega$. That pair of equations is actually what I spent most of yesterday working on.
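To make the definition concrete, here’s a quick check for the case $n = 2$, i.e. functions $\Omega \subseteq \mathbb{R}^3 \to \mathcal{A}$; the example is my own, not from the paper. Expanding $Df$ for $f = f_0 + f_1 e_1 + f_2 e_2$ and collecting coefficients of the blades $1, e_1, e_2, e_1 e_2$ gives four real components, which the sympy sketch below computes directly; the function $z_1 = x_1 - x_0 e_1$ turns out to be monogenic, while $f(x) = x_0$ is not:

```python
import sympy as sp

x0, x1, x2 = sp.symbols("x0 x1 x2")

def D_components(f0, f1, f2):
    """Blade components (1, e1, e2, e1*e2) of Df for the paravector-valued
    f = f0 + f1*e1 + f2*e2 on R^3, with D = d/dx0 + e1 d/dx1 + e2 d/dx2."""
    d = sp.diff
    return [sp.simplify(c) for c in (
        d(f0, x0) - d(f1, x1) - d(f2, x2),   # scalar part
        d(f1, x0) + d(f0, x1),               # e1 part
        d(f2, x0) + d(f0, x2),               # e2 part
        d(f2, x1) - d(f1, x2),               # e1*e2 part
    )]

zero = sp.Integer(0)
print(D_components(x1, -x0, zero))   # z1 = x1 - x0*e1: [0, 0, 0, 0]
print(D_components(x0, zero, zero))  # f = x0: scalar part is 1, not monogenic
```

(For paravector-valued $f$, checking $fD$ gives the same components up to a sign on the $e_1 e_2$ part, so this single check suffices for these examples.)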
Proposition. The pair of equations $Df = fD = 0$ is equivalent to the system of equations

$$\begin{cases} \dfrac{\partial f_0}{\partial x_0} - \displaystyle\sum_{i=1}^{n} \dfrac{\partial f_i}{\partial x_i} = 0, & \\[2ex] \dfrac{\partial f_j}{\partial x_0} + \dfrac{\partial f_0}{\partial x_j} = 0, & j = 1, \ldots, n, \\[2ex] \dfrac{\partial f_j}{\partial x_i} - \dfrac{\partial f_i}{\partial x_j} = 0, & 1 \leq i < j \leq n. \end{cases}$$
A version of this statement was given, without proof, as a single sentence in the middle of a page near the beginning of my professor’s paper, and since my goal is to know what he knows, I took out some paper and decided to prove it. My first goal was to consider the above definition of $f$ in terms of $f_0, f_1, \ldots, f_n$ and to see how exactly that behaved under the operator $D$ (I looked at $Df$ first). In particular, writing $e_0 = 1$, $x = (x_0, \ldots, x_n)$, and $\partial_i = \partial/\partial x_i$:

$$Df = \left( \sum_{i=0}^{n} e_i\, \partial_i \right) \left( \sum_{j=0}^{n} f_j\, e_j \right) = \sum_{i=0}^{n} \sum_{j=0}^{n} e_i e_j\, \partial_i f_j.$$
With this as some sort of scratch work, a proof sketch can be made.
Proof of the Proposition (Sketch).
Suppose $Df = fD = 0$. In particular, $Df = 0$, where $Df$ is the double sum computed above. Notice that the system in the Proposition consists of two cases, the first of which deals with the “mixed partials” $\partial_i f_i$ when $i = j$ and the second with actual mixed partials $\partial_i f_j$ for $i \neq j$. Unsurprisingly, the proof hinges on the behavior of the “mixed product” $e_i e_j$ in cases when $i = j$ and when $i \neq j$. One crucial ingredient for this direction is the identity mentioned above regarding the basis elements $e_1, \ldots, e_n$, namely that $e_i e_j + e_j e_i = -2\delta_{ij}$. In the case where $i = j$, it follows that $2e_i^2 = -2$, i.e. that $e_i^2 = -1$; when $i \neq j$, it follows that $e_i e_j = -e_j e_i$. (The element $e_0 = 1$ is the exception: $e_0^2 = 1$, and $e_0$ commutes with everything.) Consider breaking $Df$ further into three cases for $i = j$, $i < j$, and $i > j$:

$$Df = \sum_{i=0}^{n} e_i^2\, \partial_i f_i + \sum_{0 \leq i < j \leq n} e_i e_j\, \partial_i f_j + \sum_{0 \leq j < i \leq n} e_i e_j\, \partial_i f_j.$$
The mindset here is as follows:
Consider a matrix $E$ whose rows represent the values of $i = 1, \ldots, n$, whose columns represent the values of $j = 1, \ldots, n$, and whose entries satisfy $E_{ij} = e_i e_j$. $E$ is almost antisymmetric: its diagonal entries are all $-1$, and its lower-triangular entries $E_{ij}$, $i > j$, are exactly the negatives of its upper-triangular entries $E_{ji}$. Thus, $E_{ij} = -E_{ji}$ for $i \neq j$. If the matrix $E$ were modified into the matrix $\widetilde{E}$ whose entries are now the values $\widetilde{E}_{ij} = e_i e_j\, \partial_i f_j$, the entries of $\widetilde{E}$ pair up by way of the same relation: for $i \neq j$, $\widetilde{E}_{ij} + \widetilde{E}_{ji} = e_i e_j \left( \partial_i f_j - \partial_j f_i \right)$. Hence (noting that the terms of $Df$ involving the index $0$ combine directly, since $e_0 = 1$ commutes with everything), the expression for $Df$ can be rewritten as follows:

$$Df = \left( \partial_0 f_0 - \sum_{i=1}^{n} \partial_i f_i \right) + \sum_{j=1}^{n} \left( \partial_0 f_j + \partial_j f_0 \right) e_j + \sum_{1 \leq i < j \leq n} \left( \partial_i f_j - \partial_j f_i \right) e_i e_j.$$
Finally, because of linear independence among the basis elements $1$, $e_j$, and $e_i e_j$, the rewritten expression for $Df$ equals $0$ precisely when the scalar term and the two sums each equal zero; moreover (again by linear independence of the $e_j$ and of the $e_i e_j$) each of those two sums equals zero precisely when each of its summands equals zero. Thus, $Df = 0$ implies the system in the Proposition. Note that a similar argument holds for $fD = 0$.
Suppose instead that the system in the Proposition holds. Summing the terms $e_i e_j\, \partial_i f_j$ over the indices $i \neq j$ and adding the result to the sum of the terms with $i = j$ yields that $0$ is equal to precisely the double sum defining $Df$. Hence, tracing through the above equalities “backwards” yields that $Df = 0$ and, by modifying the argument appropriately, that $fD = 0$ as well.
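Since the whole computation is finite and mechanical once $n$ is fixed, the proposition can also be machine-checked. The sympy sketch below (all names my own) builds the full product in $C\ell_{0,2}$, applies $D$ to a generic paravector-valued $f$, and confirms that the blade coefficients of $Df$ are exactly the left-hand sides of the system for $n = 2$:

```python
import sympy as sp

x0, x1, x2 = sp.symbols("x0 x1 x2")
xs = (x0, x1, x2)
f0, f1, f2 = [sp.Function(f"f{k}")(*xs) for k in range(3)]

BLADES = [(), (1,), (2,), (1, 2)]  # basis 1, e1, e2, e1*e2 of Cl(0,2)

def blade_mul(A, B):
    """e_A * e_B = sign * e_C, with e_i e_j = -e_j e_i (i != j), e_i^2 = -1."""
    coeffs, sign = list(A), 1
    for b in B:
        pos = len(coeffs)
        while pos > 0 and coeffs[pos - 1] > b:
            pos, sign = pos - 1, -sign
        if pos > 0 and coeffs[pos - 1] == b:
            coeffs.pop(pos - 1)
            sign = -sign
        else:
            coeffs.insert(pos, b)
    return sign, tuple(coeffs)

def mul(a, b):
    """Product of Clifford numbers given as {blade: coefficient} dicts."""
    out = {A: 0 for A in BLADES}
    for A, ca in a.items():
        for B, cb in b.items():
            s, C = blade_mul(A, B)
            out[C] += s * ca * cb
    return out

f = {(): f0, (1,): f1, (2,): f2, (1, 2): 0}
e = [{(): 1}, {(1,): 1}, {(2,): 1}]  # e0 = 1, e1, e2 as Clifford numbers

# Df = sum_i e_i * (partial f / partial x_i), blade by blade
Df = {A: 0 for A in BLADES}
for i in range(3):
    df = {A: sp.diff(c, xs[i]) for A, c in f.items()}
    for A, c in mul(e[i], df).items():
        Df[A] += c

system = {  # left-hand sides of the equivalent system, n = 2
    (): sp.diff(f0, x0) - sp.diff(f1, x1) - sp.diff(f2, x2),
    (1,): sp.diff(f1, x0) + sp.diff(f0, x1),
    (2,): sp.diff(f2, x0) + sp.diff(f0, x2),
    (1, 2): sp.diff(f2, x1) - sp.diff(f1, x2),
}
assert all(sp.expand(Df[A] - system[A]) == 0 for A in BLADES)
```

Of course, this only spot-checks one fixed $n$, and it silently uses the fact that for paravector-valued $f$ the components of $fD$ agree with those of $Df$ up to a sign on the bivector part; it’s a sanity check, not a substitute for the argument above.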
Anyway, I just thought this was a nifty little exercise. It took me a little while to figure out whether I believed the proposition, and over an hour to figure out why I should believe it and how I could write out the details.
Suffice it to say, I’m pretty hooked on this Clifford Analysis thing.
Well, it’s now almost 6:30pm (about 5 hours after I started writing this thing, thanks to lots of interruptions + baby duty in the middle), so I should probably try to get through some new stuff. Maybe I’ll spend some time in this differential algebra text…
Until next time….