# Category Theory: Moving Up, Out

I remember my very first encounter with Category Theory.

I was in my fourth semester (a spring semester) as a master’s student: I had passed my two mandatory abstract algebra classes my first two semesters there and had passed my comprehensive exams during the Fall (my third semester). As was custom, then, I spent my third and fourth semesters taking random “advanced topics” courses aimed at potential doctoral students, and one of the sequences I took was the algebra sequence.

My first semester of doctoral-level (or 7000-level, as was colloquial there) algebra covered the classification of finite simple groups and was by far the most difficult class I’d ever taken at the time. Apparently, being a student who doesn’t remotely have the sufficient background and being in a class run by a professor who has unimaginably-greater background – who teaches as if the audience consists of peers – makes for a difficult time. I squeaked out an A.

In the second semester of 7000-algebra, however, things were far less directed. Long story short, it was a potpourri of material, some from algebraic topology, some from homological algebra, and some – about 1/3 of the course, I’d say – from category theory. That was my very first exposure to an area I didn’t otherwise know existed, and I remember thinking, This is the most abstract thing that’s ever been devised, and also, There’s no way this will ever be far-reaching outside the realm of mathematics.

I’ve since realized that the first assertion isn’t really true – unsurprisingly since my exposure to other areas has increased drastically since leaving there – but apparently, the second one isn’t either. To be more precise, I stumbled upon this article online which describes a number of non-math areas that have been benefiting – and will continue to benefit – from the use of category theoretic ideas.

It’s really quite amazing to see, but in and of itself it’s unsurprising given that category theory was devised precisely to provide unity among the wide variety of subdisciplines of mathematics. As a pure mathematician, I always tried to strike a balance between being interested in too broad a range of topics and being too narrow in my scope; the spread of category theory invites us all to examine that aspect of ourselves. To borrow a quote from David Spivak’s exposition (available on the arXiv):

> It is often useful to focus one’s study by viewing an individual thing, or a group of things, as though it exists in isolation. However, the ability to rigorously change our point of view, seeing our object of study in a different context, often yields unexpected insights. Moreover, this ability to change perspective is indispensable for effectively communicating with and learning from others. It is the relationships between things, rather than the things in and by themselves, that are responsible for generating the rich variety of phenomena we observe in the physical, informational, and mathematical worlds.

Here’s to you, category theory!

# Study plans, or Why it’s embarrassingly late into the summer and I still haven’t finalized a good way to learn mathematics

So it’s now creeping into the third (full) week of June. School got out for me during the first (full) week of May. Regardless of how woeful you may consider your abilities in mathematics, I’m sure you can deduce something very clear from these facts: roughly half of my summer is already gone.

Generally, that fact in and of itself wouldn’t be too terrible. I mean, big deal: Half the summer’s over, and I’ve been working throughout. How big of a failure can that really be?

In this case, it’s actually a pretty big one.

Despite my having read pretty much nonstop since summer began, I haven’t really made it very far into anything substantial. Compounded onto that is the fact that I’ve had to abandon a handful of reading projects after making what appeared to be pretty not-terrible progress into them because of various hindrances (usually, a lack of requisite background knowledge).

It’s been a pretty frustrating, pretty not-successful summer, objectively.

# The Half-Week That Never Was

As I type this, it’s 2:45am on a Wednesday. I haven’t been around these parts since Sunday night (actually, 3:30am Monday morning), so one would think I’d have accumulated a ginormous list of professional doings to post proudly about here.

I regret to inform: That is not the case.

# Sunday Summary

My mathematizing wasn’t very impressive today. I:

1. Read some pages on Gröbner bases in Dummit and Foote.
2. Did some tutoring / tutor-related things.
3. Spent some time figuring out some solutions from Hatcher.

Despite it being 3:30am, my game plan is to be up around 8am to make a trip to campus. While there, I plan to do the usual errand things, and to then lock myself in my office for 5-or-so hours and do some legit math things.

That means I need to take good resources there with me.

Peace.

# Verifying Easy Properties, or Nowhere, Going Nowhere

Whenever I decide to learn something – and especially when it’s learning for learning’s sake – I make sure to be meticulous with things. In particular, whenever I see propositions stated without proof, I break out the old pen and paper and start verifying.

The purpose of this post is to examine a few of the properties on page 661 of Dummit and Foote. Some of the background notation needed is discussed in this previous entry.

Claim. The following properties of the maps $\mathcal{Z}$ and $\mathcal{I}$ are very easy exercises. Let $A$ and $B$ be subsets of $\mathbb{A}^n$.
(7) $\mathcal{I}(A\cup B)=\mathcal{I}(A)\cap\mathcal{I}(B)$.
(9) If $A$ is any subset of $\mathbb{A}^n$, then $A\subseteq\mathcal{Z}(\mathcal{I}(A))$, and if $I$ is any ideal, then $I\subseteq\mathcal{I}(\mathcal{Z}(I))$.
(10) If $V=\mathcal{Z}(I)$ is an affine algebraic set then $V=\mathcal{Z}(\mathcal{I}(V))$, and if $I=\mathcal{I}(A)$ then $\mathcal{I}(\mathcal{Z}(I))=I$, i.e. $\mathcal{Z}(\mathcal{I}(\mathcal{Z}(I)))=\mathcal{Z}(I)$ and $\mathcal{I}(\mathcal{Z}(\mathcal{I}(A)))=\mathcal{I}(A)$.

Proof. (7) A polynomial $f$ vanishes on $A\cup B$ if and only if it vanishes on the entirety of $A$ and on the entirety of $B$, i.e., if and only if $f\in\mathcal{I}(A)$ and $f\in\mathcal{I}(B)$. Hence $\mathcal{I}(A\cup B)=\mathcal{I}(A)\cap\mathcal{I}(B)$.
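As a concrete sanity check of (7) – my own illustration, not one from Dummit and Foote – take the finite sets $A=\{1\}$ and $B=\{2\}$ in $\mathbb{A}^1$, so that $\mathcal{I}(A)=(x-1)$, $\mathcal{I}(B)=(x-2)$, and $\mathcal{I}(A\cup B)=((x-1)(x-2))$. A quick SymPy sketch:

```python
from sympy import symbols, div, expand

x = symbols('x')

# Hypothetical finite sets A = {1}, B = {2} in affine 1-space, so that
# I(A) = (x - 1), I(B) = (x - 2), and I(A ∪ B) = ((x - 1)(x - 2)).
f = expand((x - 1)*(x - 2)*(x + 7))  # a polynomial vanishing on A ∪ B

# f vanishes at both points ...
print(f.subs(x, 1), f.subs(x, 2))  # 0 0

# ... and f is divisible by x - 1 and by x - 2 (zero remainders),
# i.e. f lies in I(A) and in I(B), hence in their intersection.
print(div(f, x - 1, x)[1])  # 0
print(div(f, x - 2, x)[1])  # 0
```

Both remainders vanish, confirming that a polynomial vanishing on $A\cup B$ lies in $\mathcal{I}(A)\cap\mathcal{I}(B)$.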

(9) Suppose $A\subseteq\mathbb{A}^n$. By definition, every $f\in\mathcal{I}(A)$ vanishes identically on $A$, so each point of $A$ lies in the common zero set $\mathcal{Z}(\mathcal{I}(A))$ of all such $f$. Therefore, $A\subseteq\mathcal{Z}(\mathcal{I}(A))$. Note that the inclusion doesn’t necessarily reverse, since a polynomial $f\in\mathcal{I}(A)$ may also vanish at points $(x_1,\ldots,x_n)\not\in A$.

Next, suppose that $I$ is an ideal of the ring $k[x_1,\ldots,x_n]$ and that $f\in I$. By definition, the locus $\mathcal{Z}(I)\subset\mathbb{A}^n$ is the collection of all points $(a_1,\ldots,a_n)\in\mathbb{A}^n$ at which every element of $I$ vanishes. In particular, $f(a)=0$ for all $a=(a_1,\ldots,a_n)\in\mathcal{Z}(I)$, i.e., $f$ vanishes on all of $\mathcal{Z}(I)$ and is therefore an element of the ideal $\mathcal{I}(\mathcal{Z}(I))$. Since $f\in I$ was arbitrary, $I\subseteq\mathcal{I}(\mathcal{Z}(I))$.
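For a concrete instance of the strictness in (9) – again an example I’m adding, not one from the book – take $I=(x^2)$ in $k[x]$: then $\mathcal{Z}(I)=\{0\}$, while $\mathcal{I}(\mathcal{Z}(I))=(x)$ strictly contains $(x^2)$. A short SymPy check:

```python
from sympy import symbols, div, solveset, S

x = symbols('x')

# Z((x^2)) = {0}: the only root of x^2 is 0.
print(solveset(x**2, x, domain=S.Complexes))  # {0}

# I(Z(I)) = I({0}) = (x), but x is NOT in I = (x^2):
# dividing x by x^2 leaves the nonzero remainder x.
quotient, remainder = div(x, x**2, x)
print(remainder)  # x
```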

(10) First, suppose that $I$ is an ideal and that $V=\mathcal{Z}(I)$ is an affine algebraic set. It suffices to show that $V=\mathcal{Z}(\mathcal{I}(V))$ by way of two-sided inclusion. To that end, let $a=(a_1,\ldots,a_n)\in V$. By definition of $\mathcal{I}(V)$, every $f\in\mathcal{I}(V)$ vanishes on all of $V$, so in particular $f(a)=0$ for every $f\in\mathcal{I}(V)$. Hence $a\in\mathcal{Z}(\mathcal{I}(V))$, and so $V\subseteq\mathcal{Z}(\mathcal{I}(V))$.

Conversely, if $a\in\mathcal{Z}(\mathcal{I}(V))$, then $f(a)=0$ for all $f\in\mathcal{I}(V)$. By property (9), $I\subseteq\mathcal{I}(\mathcal{Z}(I))=\mathcal{I}(V)$, so in particular every $f\in I$ satisfies $f(a)=0$. Hence, $a$ itself must be an element of $\mathcal{Z}(I)=V$, whereby the equality is proved.

The other expression is proved similarly and is omitted for brevity. Therefore, as claimed,

$\mathcal{Z}(\mathcal{I}(\mathcal{Z}(I)))=\mathcal{Z}(I)$ and $\mathcal{I}(\mathcal{Z}(\mathcal{I}(A)))=\mathcal{I}(A)$,

from which it follows that $\mathcal{Z}$ and $\mathcal{I}$ are mutually inverse bijections between the collection of affine algebraic sets $V=\mathcal{Z}(I)$ and the collection of ideals of the form $\mathcal{I}(A)$.    $\square$

# Dummit and Foote Example

I found one particular example – namely Example 2 on page 660 of Dummit and Foote – to be a good exercise in the sense that it tied together a lot of ideas from earlier parts of the book. In order to share, though, I need to give a little background.

Throughout, let $k$ denote a field. The affine $n$-space $\mathbb{A}^n$ over $k$ is the set of all $n$-tuples $(k_1,k_2,\ldots,k_n)$ where $k_i\in k$ for all $i$. For a general subset $A\subset \mathbb{A}^n$, the ideal $\mathcal{I}(A)$ is called the ideal of functions vanishing at $A$ and is defined to be

$\mathcal{I}(A)=\{f\in k[x_1,\ldots,x_n]\,:\,f(a_1,\ldots,a_n)=0\text{ for all }(a_1,\ldots,a_n)\in A\}$.

It’s easily verified that $\mathcal{I}(A)$ is indeed an ideal and that it is, by definition, the unique largest ideal of functions which are identically zero on all of $A$. With that in mind, consider the aforementioned example:

Page 660, Example 2. Over any field $k$, the ideal of functions vanishing at $(a_1,\ldots,a_n)\in\mathbb{A}^n$ is a maximal ideal since it is the kernel of a surjective (ring) homomorphism from $k[x_1,\ldots,x_n]$ to the field $k$ given by evaluation at $(a_1,\ldots,a_n)$. It follows that

$\mathcal{I}((a_1,\ldots,a_n))=(x_1-a_1,x_2-a_2,\ldots,x_n-a_n)$.

Proof. As mentioned above, $\mathcal{I}((a_1,\ldots,a_n))$ is certainly an ideal. To show that it’s a maximal ideal in $k[x_1,\ldots,x_n]$, first define $\varphi:k[x_1,\ldots,x_n]\to k$ by the action that sends $f=f(x_1,x_2,\ldots,x_n)$ to the constant $f(a_1,a_2,\ldots,a_n)\in k$. Verifying that this is a ring homomorphism is routine, and $\varphi$ is surjective since, for any $c\in k$, the constant polynomial $f\equiv c$ satisfies $\varphi(f)=c$. Moreover, the kernel of $\varphi$ consists precisely of those elements $f\in k[x_1,\ldots,x_n]$ for which $f(a_1,\ldots,a_n)=0$, whereby it follows that $\ker(\varphi)=\mathcal{I}(A)$ where $A=\{(a_1,\ldots,a_n)\}\subset\mathbb{A}^n$.

Hence, $\varphi$ is a surjective ring homomorphism with kernel $\mathcal{I}(A)$. In particular, by the first (ring) isomorphism theorem, $k[x_1,\ldots,x_n]/\mathcal{I}(A)\cong\text{Im}(\varphi)$ where $\text{Im}(\varphi)=k$ by surjectivity of $\varphi$. Clearly, then, $k[x_1,\ldots,x_n]/\mathcal{I}(A)$ is a field, something that happens if and only if the ideal $\mathcal{I}(A)$ is maximal. Therefore, the first claim is proved.

For the second claim, note first that each generator $x_i-a_i$ vanishes at the point $(a_1,\ldots,a_n)$. Clearly, then, $\mathcal{I}(A)\supseteq (x_1-a_1,\ldots,x_n-a_n)$. On the other hand, suppose $f\in\mathcal{I}(A)$. Substituting $x_i=(x_i-a_i)+a_i$ and expanding, we may write

$\displaystyle f(x_1,\ldots,x_n)=f(a_1,\ldots,a_n)+\sum_{i=1}^n g_i(x_1,\ldots,x_n)(x_i-a_i)$

for some polynomials $g_i\in k[x_1,\ldots,x_n]$, since every term of the expansion other than the constant $f(a_1,\ldots,a_n)$ carries a factor of some $x_i-a_i$. Because $f(a_1,\ldots,a_n)=0$, it follows that $f\in (x_1-a_1,\ldots,x_n-a_n)$, and so $\mathcal{I}(A)\subseteq (x_1-a_1,\ldots,x_n-a_n)$. This concludes the proof.   $\square$

# Abstract Algebra Quirks

One of the things that has always driven my interest in Abstract Algebra as a field is the perceived multitude of quirks. It’s hard to speak of what I mean from a meta perspective, so I’ll just give an example.

Let $k$ be a field. Recall that a ring $R$ is called a $k$-algebra if $k\subset Z(R)$ and $1_k=1_R$, where $Z(R)$ is the center of $R$, that is, the set of elements $x\in R$ for which $xy=yx$ for all $y\in R$. From this perspective, any ring $R$ which is a $k$-algebra is, necessarily, both a ring and a vector space over $k$. For this reason, when speaking of $R$ being generated by a subset of its elements, it’s necessary to indicate in which sense the subset generates $R$.

One cool example of this necessity – an example which I find quirky, in some regards – comes in the form of the polynomial ring $R=k[x]$ in one variable over $k$. Certainly with this construction, $R$ is a $k$-algebra, and so $R$ is both a ring and a vector space over $k$. Note, then, that as a ring, $R$ is a finitely generated $k$-algebra, since $x$ is a ring generator for $R$ – that is, $R=k[x]$ is the smallest ring containing $k$ and $x$. On the other hand, $k[x]$ has basis $1,x,x^2,\ldots$ as a vector space over $k$ and hence is infinite-dimensional as a $k$-vector space.
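The contrast can be sketched concretely in SymPy (my own illustration, not from any text): building a polynomial from $x$ uses only ring operations, while viewing the same polynomial as a $k$-linear combination reads off its coordinates against the infinite monomial basis $1,x,x^2,\ldots$.

```python
from sympy import symbols, Poly

x = symbols('x')

# Ring-generated by x: any polynomial is built from x and field
# constants using only + and *; x**3 needs no generator besides x.
p = x*x*x - 2*x + 5

# Vector-space view: against the monomial basis 1, x, x^2, ..., the
# same p is the linear combination 1*x^3 + 0*x^2 + (-2)*x + 5*1.
# Higher-degree polynomials keep requiring new basis monomials, so no
# finite subset of 1, x, x^2, ... spans k[x].
print(Poly(p, x).all_coeffs())  # [1, 0, -2, 5]
```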

Long story short: Two different structural existences for a single object, and the two are, to some extent, polar opposites of one another.

Things like this always make me realize why algebra is such a necessary and beautiful field.