It is with tremendous regret for the long-term effects on mathematics that I bring you this news…
A former professor of mine (who I’ve discussed here previously) still to this day does some of the best work of anyone I’ve ever known. When he and I were colleagues (we existed at very-near geographical locations), I didn’t know the extent of his amazing work; rather, I just knew his work was amazing and that, for all intents and purposes, he should’ve been at a bigger, more prominent school.
Now, I know a bit more.
One of the things for which he was pretty famous (subjectively, natürlich) was solving a 50-ish-year-old open problem in manifold theory; another of his fortes, though, is the study of function algebras. That’s where this little journey takes us.
A function algebra is a family $A$ of continuous functions defined on a compact set $X$ which (i) is closed with respect to pointwise multiplication and addition, (ii) contains the constant functions and separates points of $X$, and (iii) is closed as a subspace of $C(X)$ where, here, $C(X)$ denotes the space of continuous functions defined on $X$ equipped with the sup norm: $\|f\|_\infty = \sup_{x \in X} |f(x)|$. Associated to such an $A$ is the collection $\mathcal{M}_A$ of all nonzero homomorphisms $\varphi : A \to \mathbb{C}$; one easily verifies that every maximal ideal of $A$ is the kernel of some element of $\mathcal{M}_A$ and vice versa, whereby the space $\mathcal{M}_A$ is called the maximal ideal space associated to $A$. Also:
Definition: A point $p$ in $X$ is said to be a peak point of $A$ provided there exists a function $f \in A$ so that $f(p) = 1$ and $|f| < 1$ on $X \setminus \{p\}$.
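For a concrete illustration (the standard textbook example, not one drawn from the work discussed in this post): in the disc algebra of functions continuous on the closed unit disc $\overline{\mathbb{D}}$ and holomorphic on its interior, every boundary point $\lambda$ with $|\lambda| = 1$ is a peak point, as witnessed by

```latex
f(z) = \frac{1 + \overline{\lambda}\, z}{2}, \qquad
f(\lambda) = 1, \qquad
|f(z)| \le \frac{1 + |z|}{2} \le 1,
```

where $|f(z)| = 1$ forces $|z| = 1$ and $\overline{\lambda} z = 1$, i.e., $z = \lambda$; hence $|f| < 1$ on $\overline{\mathbb{D}} \setminus \{\lambda\}$, exactly as the definition requires.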
One problem of importance in the realm of function algebras is to characterize $C(X)$ with respect to such algebras of continuous functions. To quote Anderson and Izzo:
A central problem in the subject of uniform algebras is to characterize $C(X)$ among the uniform algebras on $X$.
One attempt at such a characterization was the so-called peak point conjecture, which was strongly believed to be true until it was shown to be definitively untrue. The purpose of this entry is to focus a little on topics related thereto, including the conjecture itself, the counterexample and its construction, and related results (including work done by Izzo and various collaborators).
So it’s now creeping into the third (full) week of June. School got out for me during the first (full) week of May. Regardless of how woeful you may consider your abilities in mathematics, I’m sure you can deduce something very clear from these facts:
Summer is about half over.
Generally, that fact in and of itself wouldn’t be too terrible. I mean, big deal: Half the summer’s over, and I’ve been working throughout. How big of a failure can that really be?
In this case, it’s actually a pretty big one.
Despite my having read pretty much nonstop since summer began, I haven’t really made it very far into anything substantial. Compounded onto that is the fact that I’ve had to abandon a handful of reading projects after making what appeared to be pretty not-terrible progress into them because of various hindrances (usually, a lack of requisite background knowledge).
It’s been a pretty frustrating, pretty not successful summer, objectively.
So I was able – fortunately – to wake up early and to do some legit reading, despite having only a handful of sleep hours (4-ish?). That’s a definite positive. Right now, I’m about 30 minutes away from a forced obligation (that’s a definite negative), but I wanted to use the 30 minutes I have to still do something constructive. Rather than spend this time wracking my brain with really difficult, hard-to-understand reading that would leave me mentally exhausted for the aforementioned obligation, I decided to come here and write a little exposition regarding something mathematical.
In particular, I’m going to talk about the so-called Richard’s Paradox (see here).
Of course, the fact that I’m avoiding theoretical math to postpone mental exhaustion while using the time to come here and talk about theoretical math is a bit of a paradox as well, so I’ll basically be expositing, paradoxically, about paradoxes.
You have no idea how much I crack myself up.
The ideology that birthed Richard’s paradox is intimately tied to the idea of metamathematics, that is, the study of metatheories – theories about mathematical theories – using mathematical ideas and quantification. I’m not going to get too deeply involved in the discussion on that particular topic; the interested reader, of course, can scope out more here.
To begin, we let $\mathbb{N}$ denote the set of positive integers (aka, the natural numbers) and we investigate the collection of all “formal English language statements of finite length” which define a number $n$ of $\mathbb{N}$. For example, The first prime number, The smallest perfect number, and The cube of the first odd number larger than five are such statements, as they verbally describe the numbers 2, 6, and $7^3 = 343$, respectively. On the other hand, statements like The number larger than all other numbers and Scotland is a place I’d like to visit fail to make the list due to the fact that the first doesn’t describe a number in $\mathbb{N}$ and the second doesn’t describe a number at all. Let $\mathcal{A}$ denote the collection of all so-called qualifying statements, that is, statements that do describe elements $n \in \mathbb{N}$.
Note, first, that the collection $\mathcal{A}$ is infinite due to the fact that the statement The $i$th natural number is a qualifying statement for each $i \in \mathbb{N}$. It’s also countable: Only a countable number of words exist in the English language, and each statement in $\mathcal{A}$ consists of a finite string of these countably many words. This fact, along with obvious language considerations, says that $\mathcal{A}$ can actually be given an ordering.
Indeed, consider a two-part ordering: First, organize the statements in $\mathcal{A}$ by length so that the shortest statements appear first, and then organize statements of the same length by standard lexicographical (dictionary) ordering. The result is an ordered version of the countably infinite collection which we’ll again denote by $\mathcal{A}$.
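As a toy illustration of this two-part ordering (the statements below are just the examples from earlier, not a formal enumeration), shortest-first with lexicographic tie-breaking can be sketched as:

```python
# Sketch of the two-part ordering: sort candidate statements first by
# length, then lexicographically among statements of equal length.
statements = [
    "The smallest perfect number",
    "The first prime number",
    "The cube of the first odd number larger than five",
]

# Python tuples compare componentwise, so the key (len, text) gives
# exactly the "length first, dictionary order second" ordering.
ordered = sorted(statements, key=lambda s: (len(s), s))

for s in ordered:
    print(s)
```

Since real English admits only countably many finite strings, the same recipe in principle well-orders all of $\mathcal{A}$.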
As of now, almost nothing has been done.
I’ve posted before about how easily my sleep can be dominated by math stuff after a hard day or thirty of being cooped up in an office, grinding away at theorems and postulates and proofs with hardly a break in the mix.
Over the summer, the same thing happens after only a medium-hard day or three.
I woke up twice this morning, about two hours apart, and both times I was thinking about a random piece of mathematics not related to anything I’ve been actively studying recently. When I finally awoke a third time – this time, for good – I of course couldn’t remember it at all.
Then, finally, I sat in silence and forced my synapses to make connections they didn’t want to make and eventually, after a solid twenty minutes of mental strain, it all came flooding back in.
This is an exposition about so-called Dynkin (π-λ) Systems and the corresponding Dynkin π-λ Theorem. Feel free to stick around.
Set theory, to me, probably constitutes the foundation of mathematics in a sense stricter than can be claimed by any other subdiscipline. At lots of first-tier schools, there are graduate-level courses on set theory; at most other schools, there aren’t. I, personally, experienced my lone formal treatment of the discipline in the form of Math 3040 at Valdosta State University back in Fall 2007, and two things about this course stand out to me.
Firstly, I struggled. Hard. I eventually managed to squeeze out an A by what my professor later told me was a miracle: Basically, I did terribly all semester and then made a 100% on the final by spending the week before memorizing literally every single thing my professor had written on the board throughout. It took going to grad school for me to realize that learning mathematics wasn’t accomplished using that technique.
Secondly, I realize that the class I took was hardly a proper class in set theory. It was more an introduction to higher mathematics, and consisted of only about 3 weeks of formal set theory before we moved on to introductory proof techniques, induction, (semi-)formal logic, etc.
So basically, I’ve never had a course in set theory.
One result of this is my continued inability to remember the quote-unquote fundamental set identities. I can usually figure them out with 70-ish percent accuracy, but generally speaking I struggle. If $f : X \to Y$ and $A, B \subseteq X$, then does $f(A \cup B)$ equal $f(A) \cup f(B)$? Does $f(A \cap B)$ equal $f(A) \cap f(B)$? What about $f(A \setminus B)$? And what about the pre-images $f^{-1}(C)$? What about products? The variations here are endless and, for some reason, I can never keep those things straight. This is a post to address that.
Before proceeding, note that this post came about because of my randomly pulling Sieradski’s An Introduction to Topology and Homotopy off of my dusty bookshelf for the first time since – well, since ever. The introduction has a good balance of set theory, advanced calculus of the real line, cardinal and ordinal properties, etc. I think I’m going to give this book a once-over during the next few days.
In any event, here are some things that I should work on remembering, and maybe other people out there will care to as well. I’m assuming that the basic set operations (union, intersection, difference, product, etc.) are known.
Product Properties. Let $A, B \subseteq X$ and $C, D \subseteq Y$. Then in $X \times Y$, the following relations hold:

(1) $(A \times C) \cap (B \times D) = (A \cap B) \times (C \cap D)$;
(2) $(A \times C) \cup (B \times D) \subseteq (A \cup B) \times (C \cup D)$;
(3) $(A \cup B) \times C = (A \times C) \cup (B \times C)$;
(4) $(A \cap B) \times C = (A \times C) \cap (B \times C)$;
(5) $(A \setminus B) \times C = (A \times C) \setminus (B \times C)$;
(6) $(X \times Y) \setminus (A \times C) = \big((X \setminus A) \times Y\big) \cup \big(X \times (Y \setminus C)\big)$.
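A brute-force check of product identities like these on small concrete sets (a sketch; the particular sets are chosen arbitrarily) can be reassuring:

```python
from itertools import product

# Small concrete sets for brute-force verification of product identities.
X = {1, 2, 3}
Y = {'a', 'b', 'c'}
A, B = {1, 2}, {2, 3}      # subsets of X
C, D = {'a'}, {'a', 'b'}   # subsets of Y

def prod(S, T):
    """Cartesian product as a set of ordered pairs."""
    return set(product(S, T))

# Products respect intersection exactly.
assert prod(A, C) & prod(B, D) == prod(A & B, C & D)

# With union we only get an inclusion, not equality.
assert prod(A, C) | prod(B, D) <= prod(A | B, C | D)

# The complement of a product is not a product: it's a union of two "slabs".
full = prod(X, Y)
assert full - prod(A, C) == prod(X - A, Y) | prod(X, Y - C)
```

The last assertion is the hard-to-remember one: a pair $(x, y)$ fails to lie in $A \times C$ as soon as $x \notin A$ or $y \notin C$, which is exactly membership in one of the two slabs.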
Summary: Products work “intuitively” with most set operations in most cases, although intersection obviously more so than union. Also, item (6) there will probably never stick with me fully.
Image and Pre-image Properties. Let $f : X \to Y$ be any function. Then for all subsets $A, B \subseteq X$ and $C, D \subseteq Y$:

(1) $f(A \cup B) = f(A) \cup f(B)$;
(2) $f(A \cap B) \subseteq f(A) \cap f(B)$;
(3) $f(A) \setminus f(B) \subseteq f(A \setminus B)$;
(4) $f^{-1}(C \cup D) = f^{-1}(C) \cup f^{-1}(D)$;
(5) $f^{-1}(C \cap D) = f^{-1}(C) \cap f^{-1}(D)$;
(6) $f^{-1}(C \setminus D) = f^{-1}(C) \setminus f^{-1}(D)$.
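Again, a small sanity check (with a hypothetical function, deliberately chosen non-injective so that the inclusions that fail to be equalities actually show their strictness):

```python
# A deliberately non-injective function: 1, 3 -> 1 and 2, 4 -> 0.
def f(x):
    return x % 2

def image(S):
    return {f(x) for x in S}

def preimage(domain, T):
    return {x for x in domain if f(x) in T}

X = {1, 2, 3, 4}
A, B = {1, 2}, {3, 4}

# Images: union behaves, intersection only gives an inclusion.
assert image(A | B) == image(A) | image(B)
assert image(A & B) <= image(A) & image(B)   # empty set vs {0, 1}
assert image(A & B) != image(A) & image(B)   # strict, since A & B is empty

# Pre-images: union, intersection, and difference all behave.
C, D = {0}, {1}
assert preimage(X, C | D) == preimage(X, C) | preimage(X, D)
assert preimage(X, C & D) == preimage(X, C) & preimage(X, D)
assert preimage(X, C - D) == preimage(X, C) - preimage(X, D)
```

The failure of equality for images under intersection is exactly the non-injectivity: $A$ and $B$ are disjoint here, yet $f$ maps both onto $\{0, 1\}$.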
Summary: Unions are more cooperative than are intersections or differences, and inverse images are more intuitive than (forward) images.
So there you have it. Hopefully by typing this out, I can keep it as a piece of data that’s fresh in my mind.