It is with tremendous regret, given the long-term effects on mathematics, that I bring you this news…
A former professor of mine (whom I’ve discussed here previously) still to this day does some of the best work of anyone I’ve ever known. When he and I were colleagues (we existed at very-near geographical locations), I didn’t grasp the extent of his amazing work; rather, I just knew his work was amazing and that he, for all intents and purposes, should’ve been at a bigger, more prominent school.
Now, I know a bit more.
One of the things for which he was pretty famous (subjectively, natürlich) was proving a 50-ish-year-old unsolved problem in manifold theory; another of his fortes, though, is the study of function algebras. That’s where this little journey takes us.
A function algebra is a family $A$ of continuous functions defined on a compact set $X$ which (i) is closed with respect to pointwise multiplication and addition, (ii) contains the constant functions and separates points of $X$, and (iii) is closed as a subspace of $C(X)$ where, here, $C(X)$ denotes the space of continuous functions defined on $X$ equipped with the sup norm: $\|f\|_X = \sup_{x \in X} |f(x)|$. Associated to such an $A$ is the collection $\mathcal{M}_A$ of all nonzero homomorphisms $\varphi: A \to \mathbb{C}$; one easily verifies that every maximal ideal of $A$ is the kernel of some element of $\mathcal{M}_A$ and vice versa, whereby the space $\mathcal{M}_A$ is called the maximal ideal space associated to $A$. Also:
Definition: A point $p$ in $X$ is said to be a peak point of $A$ provided there exists a function $f \in A$ so that $f(p) = 1$ and $|f| < 1$ on $X \setminus \{p\}$.
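To make the definition concrete, here is a standard example (my addition, not from the post): in the disk algebra $A(\overline{\mathbb{D}})$ of functions continuous on the closed unit disk and holomorphic on its interior, every boundary point is a peak point.

```latex
Fix $p$ with $|p| = 1$ and set
\[
  f(z) = \frac{1 + \bar{p}\,z}{2}, \qquad z \in \overline{\mathbb{D}}.
\]
Then $f \in A(\overline{\mathbb{D}})$ and $f(p) = \tfrac{1}{2}(1 + |p|^2) = 1$.
For $z \neq p$ we have $\bar{p}z \neq 1$, so either $\bar{p}z$ is not a
nonnegative real (and the triangle inequality $|1 + \bar{p}z| < 1 + |\bar{p}z|
\le 2$ is strict) or $\bar{p}z \in [0,1)$ (and $1 + \bar{p}z < 2$ outright);
in either case $|f(z)| < 1$. Hence $p$ is a peak point of
$A(\overline{\mathbb{D}})$.
```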
One problem of importance in the realm of function algebras is to characterize $C(X)$ among such algebras on $X$. To quote Anderson and Izzo:
A central problem in the subject of uniform algebras is to characterize $C(X)$ among the uniform algebras on $X$.
One proposed answer was the so-called peak point conjecture, which was widely believed to be true until it was shown definitively to be false. The purpose of this entry is to focus a little on topics related thereto, including the conjecture itself, the counterexample and its construction, and the related results (including work done by Izzo and various collaborators).
It’s been nearly a week since I’ve been around these parts in any substantial regard whatsoever. Truthfully, the (large) amount of hectic things going on, combined with the (small) amount of extracurricular math I’ve done, has left me very little in the way of substance to share. I’m hoping that changes soon. In the meantime, here’s a list of what’s been going down with me.
- School started back on Monday (the 24th). I take one class five days a week, I sit in on one lecture as a TA (about 90 minutes a week), and I proctor labs for four hours per week. I also have two office hours per week.
- I interviewed and was accepted for a research internship at Wolfram. I’ll be a contributing member of the Wolfram|Alpha team and my supervisor will be the one and only Eric W. Weisstein. I’m beyond pumped about this.
- I interviewed and was accepted for a tutor-type position with Pearson, as well. This, along with the Wolfram job, my FSU TA position, and my (irregular) tutoring with Tutor.com means I officially have four jobs simultaneously. Maybe you can see now why I haven’t been around much?
- I’ve spent a lot of time (or what seems like it) doing homework for the Foundations of Math class I’m taking. We have homework due every single day. C’est la vie.
- I’ve been preparing grant application materials in (relative) secrecy for a while now. I think I may be winding down on that front. I’m hoping it pays dividends; I could really use the extra research time in the ’13-’14 year(s).
So you see, somehow, things have been really slow and really hectic all at the same time. I have no complaints, overall, except that I wish I’d been able to maintain my pretty-active summer research schedule for longer. I’m hoping to fall into some kind of routine with all the jobs, etc., so that everything can work in harmony. I guess if worse comes to worst, I’ll miss out on the remaining month-ish of free-range math research and focus primarily on learning for the class I’m taking + the research for Wolfram.
And that, my friends, is one of the most favorable versions of “between a rock and a hard place” any mathematician has ever been in.
Life really is good.
Until next time….
A few days ago, I posted about a conversation I had with my friend L. We spent some time catching up and, in so doing, spent a little time talking about this particular plot of space on the grand ol’ internet. He mentioned a couple blog topics for me to consider and also asked if I was contemplating research in algebra/topology; looking back, the fact that L’s an analyst, the fact that I have very few analysis posts here, and the fact that the topics he suggested were analysis topics made me realize I really do need to do a better job representing my enjoyment for analysis. Consider this entry step one of that, perhaps.
Rather than spending a bunch of time researching stuff I’d never seen before, I decided to type up a little summary of an interesting article I found online when I was a master’s student. For a little perspective as to why this particular article is important, we’ll have to take a trip into so-called higher education and examine the topic that generally serves as most people’s introduction to grown-up mathematics, i.e., calculus. A (really, really over-simplified) synopsis of calculus can be summed up in this way: Calculus is a class that abstracts the unknown variable quantities thrown at you in Algebra I/II into unknown variable quantities that themselves can vary by way of limiting arguments.
And that’s basically it: In America, calculus is really just taught as the algebra of limits. As such, some basic limit-intrinsic notions such as continuity, differentiability, and integrability are touched on / hinted at, and at the end of fourteen weeks of being fooled into thinking you’re finally understanding what math is, you’re sent on your way. For most, that’s the end of the story, but for a self-selecting few, the journey through mathematics continues, and new techniques / ideas get thrown at you in hopes that they’ll stick and that you’ll be able to use them for something special….
…and at the same time, for that self-selecting few, it’s not uncommon at all for certain somewhat obvious questions to go unasked through the years. For example: It’s invariably shown in Calculus I that $f(x) = |x|$ fails to be differentiable at $x = 0$ because of the sharp edge there. It stands to reason, then, that combinations and scalings of the absolute value function with two, three, four, etc. sharp edges would fail to be differentiable at two, three, four, etc. values of $x$. This idea isn’t a hard one to grasp for a calculus student. But then the next question: How many points of non-differentiability can a function have? Or how about, Construct a function that fails to be differentiable at infinitely many points. Most students would be quick to adapt previous examples and notice that a saw-blade function with sharp points at each value $x = n$, $n \in \mathbb{Z}$, proves the existence of functions with infinitely many points of non-differentiability. Again, no big deal.
So what, then? Can we have functions that are non-differentiable at uncountably many points? How about functions that are differentiable nowhere? By and large, these are ideas that escape lots of students – even students nearing the end of a traditional math major curriculum at an average American institution. I know this because I was once one of those students and have since taught several myself: I see how students fail to comprehend non-differentiability and even the everywhere-discontinuity of functions like the Dirichlet function $\chi_{\mathbb{Q}}$. It’s simply something that fails to register for the average student.
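The classical nowhere-differentiable example is the Weierstrass function $W(x) = \sum_{n \ge 0} a^n \cos(b^n \pi x)$ with $0 < a < 1$ and $ab > 1 + \tfrac{3\pi}{2}$. As a quick numerical sketch (my own, not part of the original post — the parameter choices here are just one valid pair), one can watch symmetric difference quotients of a partial sum refuse to settle down:

```python
import math

def weierstrass_partial(x, a=0.5, b=13, terms=20):
    """Partial sum of the Weierstrass series: sum_n a^n * cos(b^n * pi * x).

    With a = 0.5, b = 13 we have ab = 6.5 > 1 + 3*pi/2, the classical
    condition under which the limit function is nowhere differentiable.
    """
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

def slope(x, h):
    """Symmetric difference quotient of the partial sum at x."""
    return (weierstrass_partial(x + h) - weierstrass_partial(x - h)) / (2 * h)

# As h shrinks, the quotients fluctuate and grow in magnitude instead of
# settling toward a limit -- a numerical hint (not a proof!) that the
# full series defines a function with no derivative anywhere.
for h in (1e-2, 1e-4, 1e-6):
    print(h, slope(0.3, h))
```

Of course, a finite partial sum *is* differentiable; the point of the sketch is only that the quotients at these scales already behave badly, foreshadowing the limit.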
Incidentally, it doesn’t always stop there. L was actually telling me a story once about a statistics professor we both knew who claimed, absent-mindedly, that most continuous functions are differentiable. That, of course, is a big statement, and for the inquisitive audience member, the natural response is: Prove it. Hence the aforementioned paper….
So I was able – fortunately – to wake up early and to do some legit reading, despite having only a handful of sleep hours (4-ish?). That’s a definite positive. Right now, I’m about 30 minutes away from a forced obligation (that’s a definite negative), but I wanted to use the 30 minutes I have to still do something constructive. Rather than spend this time wracking my brain with really difficult, hard-to-understand reading that would leave me mentally exhausted for the aforementioned obligation, I decided to come here and write a little exposition regarding something mathematical.
In particular, I’m going to talk about the so-called Richard’s Paradox (see here).
Of course, the fact that I’m avoiding theoretical math to postpone mental exhaustion while using the time to come here and talk about theoretical math is a bit of a paradox as well, so I’ll basically be expositing, paradoxically, about paradoxes.
You have no idea how much I crack myself up.
The ideology that birthed Richard’s paradox is intimately tied to the idea of metamathematics, that is, the study of metatheories – theories about mathematical theories – using mathematical ideas and quantification. I’m not going to get too deeply involved in the discussion on that particular topic; the interested reader, of course, can scope out more here.
To begin, we let $\mathbb{N}$ denote the set of nonzero positive integers (aka, the natural numbers) and we investigate the collection of all “formal English language statements of finite length” which define a number in $\mathbb{N}$. For example, The first prime number, The smallest perfect number, and The cube of the first odd number larger than five are such statements, as they verbally describe the numbers 2, 6, and $7^3 = 343$, respectively. On the other hand, statements like The number larger than all other numbers and Scotland is a place I’d like to visit fail to make the list due to the fact that the first doesn’t describe a number in $\mathbb{N}$ and the second doesn’t describe a number at all. Let $E$ denote the collection of all so-called qualifying statements, that is, statements that do describe elements of $\mathbb{N}$.
Note, first, that the collection $E$ is infinite due to the fact that the statement The $i$th natural number is a qualifying statement for each $i \in \mathbb{N}$. It’s also countable: Only a countable number of words exist in the English language, and each statement in $E$ consists of a finite string of these countably many words. This fact, along with obvious language considerations, says that $E$ can actually be given an ordering.
Indeed, consider a two-part ordering: First, organize the statements in $E$ by length so that the shortest statements appear first, and then organize statements of the same length by standard lexicographical (dictionary) ordering. The result is an ordered version of the countably infinite collection which we’ll again denote by $E$.
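The two-part ordering is easy to see on a toy scale. A minimal sketch (the first three statements are from the examples above; the fourth is one I’ve invented for variety):

```python
# Sort first by length, then lexicographically among statements of
# equal length -- exactly the two-part ordering described above.
statements = [
    "The smallest perfect number",
    "The first prime number",
    "The cube of the first odd number larger than five",
    "The fourth natural number",
]

ordered = sorted(statements, key=lambda s: (len(s), s))
for i, s in enumerate(ordered, start=1):
    print(i, s)
```

Since length is compared first, the shortest statement ("The first prime number") lands in position 1 regardless of alphabetical order.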
As of now, almost nothing has been done.
I’ve posted before about how easily my sleep can be dominated by math stuff after a hard day or thirty of being cooped up in an office, grinding away at theorems and postulates and proofs with hardly a break in the mix.
Over the summer, the same thing happens after only a medium-hard day or three.
I woke up twice this morning, about two hours apart, and both times I was thinking about a random piece of mathematics not related to anything I’ve been actively studying recently. When I finally awoke a third time – this time, for good – I of course couldn’t remember it at all.
Then, finally, I sat in silence and forced my synapses to make connections they didn’t want to make and eventually, after a solid twenty minutes of mental strain, it all came flooding back in.
This is an exposition about so-called Dynkin (π-λ) Systems and the corresponding Dynkin π-λ Theorem. Feel free to stick around.
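For reference, here is the standard statement of the objects named above (a textbook formulation, not quoted from the exposition itself):

```latex
Let $\Omega$ be a set. A nonempty collection $\mathcal{P}$ of subsets of
$\Omega$ is a \emph{$\pi$-system} if it is closed under finite
intersections. A collection $\mathcal{D}$ of subsets of $\Omega$ is a
\emph{$\lambda$-system} (or \emph{Dynkin system}) if
(i) $\Omega \in \mathcal{D}$;
(ii) $A, B \in \mathcal{D}$ with $A \subseteq B$ implies
     $B \setminus A \in \mathcal{D}$; and
(iii) $A_1 \subseteq A_2 \subseteq \cdots$ with each $A_n \in \mathcal{D}$
      implies $\bigcup_n A_n \in \mathcal{D}$.

\textbf{Dynkin $\pi$-$\lambda$ Theorem.} If $\mathcal{P}$ is a
$\pi$-system, $\mathcal{D}$ is a $\lambda$-system, and
$\mathcal{P} \subseteq \mathcal{D}$, then
$\sigma(\mathcal{P}) \subseteq \mathcal{D}$.
```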
Set theory, to me, probably constitutes the foundation of mathematics in a sense stricter than can be claimed by any other subdiscipline. At lots of first-tier schools, there are graduate-level courses on set theory; at most other schools, there aren’t. I, personally, experienced my lone formal treatment of the discipline in the form of Math 3040 at Valdosta State University back in Fall 2007, and two things about this course stand out to me.
Firstly, I struggled. Hard. I eventually managed to squeeze out an A by what my professor later told me was a miracle: Basically, I did terribly all semester and then made a 100% on the final by spending the week before memorizing literally every single thing my professor had written on the board throughout. It took going to grad school for me to realize that learning mathematics wasn’t accomplished using that technique.
Secondly, I realize that the class I took was hardly a proper class in set theory. It was more an introduction to higher mathematics, and consisted of only about 3 weeks of formal set theory before we moved on to introductory proof techniques, induction, (semi-)formal logic, etc.
So basically, I’ve never had a course in set theory.
One result of this is my continued inability to know the quote-unquote fundamental set identities. I can usually figure them out with 70-ish percent accuracy, but generally speaking I struggle. If $f: X \to Y$ and $A, B \subseteq X$, then does $f(A \cup B) = f(A) \cup f(B)$? $f(A \cap B) = f(A) \cap f(B)$? $f(A \setminus B) = f(A) \setminus f(B)$? And what about $A = f^{-1}(f(A))$? What about $f(f^{-1}(C)) = C$ for $C \subseteq Y$? The variations here are endless and, for some reason, I can never keep those things straight. This is a post to address that.
Before proceeding, note that this post came about because of my randomly pulling Sieradski’s An Introduction to Topology and Homotopy off of my dusty bookshelf for the first time since – well, since ever. The introduction has a good balance of set theory, advanced calculus of the real line, cardinal and ordinal properties, etc. I think I’m going to give this book a once-over during the next few days.
In any event, here are some things that I should work on remembering, and maybe other people out there will care to as well. I’m assuming that the basic set operations (union, intersection, difference, product, etc.) are known.
Product Properties. Let $A_1, A_2 \subseteq X$ and $B_1, B_2 \subseteq Y$. Then in $X \times Y$, the following relations hold:

1. $A_1 \times (B_1 \cap B_2) = (A_1 \times B_1) \cap (A_1 \times B_2)$
2. $A_1 \times (B_1 \cup B_2) = (A_1 \times B_1) \cup (A_1 \times B_2)$
3. $A_1 \times (B_1 \setminus B_2) = (A_1 \times B_1) \setminus (A_1 \times B_2)$
4. $(A_1 \times B_1) \cap (A_2 \times B_2) = (A_1 \cap A_2) \times (B_1 \cap B_2)$
5. $(A_1 \times B_1) \cup (A_2 \times B_2) \subseteq (A_1 \cup A_2) \times (B_1 \cup B_2)$
6. $(A_1 \cup A_2) \times (B_1 \cup B_2) = (A_1 \times B_1) \cup (A_1 \times B_2) \cup (A_2 \times B_1) \cup (A_2 \times B_2)$
Summary: Products work “intuitively” with most set operations in most cases, although intersection obviously more so than union. Also, item (6) there will probably never stick with me fully.
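The product properties are easy to sanity-check on finite sets. A minimal sketch (the particular toy sets are my own choices) showing intersection behaving exactly while union only gives containment:

```python
from itertools import product

A1, A2 = {1, 2}, {2, 3}
B1, B2 = {'a'}, {'a', 'b'}

def times(S, T):
    """Cartesian product S x T as a set of pairs."""
    return set(product(S, T))

# Intersection distributes over products exactly...
assert times(A1, B1) & times(A2, B2) == times(A1 & A2, B1 & B2)

# ...but the union of two products is merely *contained* in the product
# of the unions, and the inclusion can be strict: (1, 'b') lies in the
# right-hand side but in neither original product.
lhs = times(A1, B1) | times(A2, B2)
rhs = times(A1 | A2, B1 | B2)
assert lhs < rhs            # proper subset
print("product identities check out")
```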
Image and Pre-image Properties. Let $f: X \to Y$ be any function. Then for all subsets $A, B \subseteq X$ and $C, D \subseteq Y$:

1. $f(A \cup B) = f(A) \cup f(B)$
2. $f(A \cap B) \subseteq f(A) \cap f(B)$
3. $f(A) \setminus f(B) \subseteq f(A \setminus B)$
4. $A \subseteq f^{-1}(f(A))$
5. $f^{-1}(C \cup D) = f^{-1}(C) \cup f^{-1}(D)$
6. $f^{-1}(C \cap D) = f^{-1}(C) \cap f^{-1}(D)$
7. $f^{-1}(C \setminus D) = f^{-1}(C) \setminus f^{-1}(D)$
8. $f(f^{-1}(C)) \subseteq C$
Summary: Unions are more cooperative than are intersections or differences, and inverse images are more intuitive than (forward) images.
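These, too, can be sanity-checked on finite sets. A minimal sketch (the function and toy sets are my own choices, picked so that the non-injectivity of $x \mapsto x^2$ makes an inclusion strict):

```python
def image(f, S):
    """Forward image f(S)."""
    return {f(x) for x in S}

def preimage(f, T, domain):
    """Pre-image f^{-1}(T) inside the given domain."""
    return {x for x in domain if f(x) in T}

X = {-2, -1, 0, 1, 2}
f = lambda x: x * x          # deliberately non-injective

A, B = {-1, 0}, {1, 2}
# Images: union is exact, but intersection is only an inclusion -- and
# strictly so here, since f collapses -1 and 1 to the same value.
assert image(f, A | B) == image(f, A) | image(f, B)
assert image(f, A & B) < image(f, A) & image(f, B)   # proper subset

# Pre-images commute with union and intersection exactly.
C, D = {0, 1}, {1, 4}
assert preimage(f, C | D, X) == preimage(f, C, X) | preimage(f, D, X)
assert preimage(f, C & D, X) == preimage(f, C, X) & preimage(f, D, X)
print("image/pre-image identities check out")
```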
So there you have it. Hopefully by typing this out, I can keep it as a piece of data that’s fresh in my mind.