June 21st
Today I learned a proof of Nakayama's lemma as a corollary of the determinant trick. Here is the determinant trick.
Proposition. Fix $R$ a ring and $M$ an $R$-module, where $M$ is generated by $n\in\NN$ elements. Now suppose $\varphi:M\to M$ is an $R$-module homomorphism such that $\varphi(M)\subseteq IM$ for an ideal $I\subseteq R.$ Then we can write \[\varphi^n+\sum_{k=0}^{n-1}\mu_{r_k}\varphi^k=0,\] where $r_k\in I.$
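As a quick sanity check, let's verify the statement by hand when $n=1,$ say with $M$ generated by a single element $m_1.$ Because $\varphi(m_1)\in IM=Im_1,$ we may write $\varphi(m_1)=am_1$ for some $a\in I.$ Then any $m\in M$ is $m=sm_1$ for some $s\in R,$ giving\[\varphi(m)=s\varphi(m_1)=sam_1=am,\]so $\varphi=\mu_a.$ In other words, $\varphi+\mu_{r_0}=0$ with $r_0:=-a\in I,$ as promised.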
Very quickly, we recall that $\mu_{r_k}:M\to M$ is the $R$-module homomorphism induced by $\mu_{r_k}(m):=r_km.$ When we proved the Cayley-Hamilton theorem, we showed that $\mu_\bullet(R)[\varphi]$ is a commutative ring.
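To spell out the key point, $\varphi$ commutes with each $\mu_r$ simply because $\varphi$ is $R$-linear: for any $r\in R$ and $m\in M,$\[(\mu_r\circ\varphi)(m)=r\varphi(m)=\varphi(rm)=(\varphi\circ\mu_r)(m),\]and of course $\mu_r\mu_s=\mu_{rs}=\mu_{sr}=\mu_s\mu_r.$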
Anyways, this is proved in pretty much the same way we proved the Cayley-Hamilton theorem. Instead of using the basis vectors $\{e_k\}_{k=1}^n$ of $R^n,$ we pick up generators $\{m_k\}_{k=1}^n$ of $M.$ Our next task is defining $\Delta$ as in the Cayley-Hamilton theorem, then we'll show $\det\Delta=0,$ and expanding this determinant will produce the desired identity.
Running things in the fast lane a bit, we don't have a matrix with which to define a $\Delta,$ but we can work backwards from the evidence that will give us $\det\Delta=0$ and use it to define $\Delta.$ Indeed, we write, for any generator $m_\ell,$\[\varphi(m_\ell)=\sum_{k=1}^na_{k,\ell}m_k\]for some $a_{k,\ell}\in R$; in fact, we may take $a_{k,\ell}\in I$ because $\varphi(m_\ell)\in IM.$ Attempting to move everything over to the left-hand side as in our proof of Cayley-Hamilton, we write $\varphi m_\ell=\sum_{k=1}^n\varphi1_{k=\ell}m_k$ and $a_{k,\ell}m_k=\mu_{a_{k,\ell}}m_k$ so that\[\sum_{k=1}^n\left(\varphi1_{k=\ell}-\mu_{a_{k,\ell}}\right)m_k=0.\]Last time, we found that $\Delta_{k,\ell}=\varphi1_{k=\ell}-\mu_{a_{k,\ell}},$ but here we will just take this as the definition. Here, $\Delta\in(\mu_\bullet(R)[\varphi])^{n\times n},$ and in fact $\Delta\in(\mu_\bullet(I)[\varphi])^{n\times n}.$
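For concreteness, when $n=2$ the matrix we have just defined is\[\Delta=\begin{pmatrix}\varphi-\mu_{a_{1,1}}&-\mu_{a_{1,2}}\\-\mu_{a_{2,1}}&\varphi-\mu_{a_{2,2}}\end{pmatrix},\]and our evidence says that dotting each column of $\Delta$ against the generators $(m_1,m_2)$ gives $0.$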
From here, the conversion from evidence to statement is the same.
Lemma. Fix everything as above. Then $\det\Delta\in\mu_\bullet(R)[\varphi]$ is the zero endomorphism.
Fixing an index $j,$ we first show that the generator $m_j$ vanishes under $\det\Delta.$ Indeed, let $\op{adj}\Delta$ be the adjugate matrix so that $\Delta\cdot\op{adj}\Delta=(\det\Delta)I_n.$ Continuing, we sum our evidence over $\ell$ like a matrix multiplication as\[\sum_{\ell=1}^n\Bigg((\op{adj}\Delta)_{\ell,j}\underbrace{\sum_{k=1}^n\Delta_{k,\ell}m_k}_0\Bigg)=0.\]Actually making this a matrix multiplication turns this into\[0=\sum_{k=1}^n\left(\sum_{\ell=1}^n\Delta_{k,\ell}(\op{adj}\Delta)_{\ell,j}\right)m_k=\sum_{k=1}^n(\Delta\cdot\op{adj}\Delta)_{k,j}m_k.\]Now, $\Delta\cdot\op{adj}\Delta=(\det\Delta)I_n$ means that $(\Delta\cdot\op{adj}\Delta)_{k,j}=(\det\Delta)1_{k=j}.$ In particular, the above summands vanish except when $k=j,$ collapsing this to $(\det\Delta)m_j=0.$
To finish off showing $\det\Delta$ is the zero endomorphism, we note that any $m\in M$ can be written as $\sum_{k=1}^nr_km_k$ for some $\{r_k\}_{k=1}^n\subseteq R.$ Then\[(\det\Delta)(m)=\sum_{k=1}^nr_k(\det\Delta)m_k=0\]by linearity. This finishes the proof. $\blacksquare$
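By the way, in the $2\times2$ case the adjugate identity we used is easy to see by hand: here\[\op{adj}\Delta=\begin{pmatrix}\Delta_{2,2}&-\Delta_{1,2}\\-\Delta_{2,1}&\Delta_{1,1}\end{pmatrix},\]and one can check directly that $\Delta\cdot\op{adj}\Delta=(\Delta_{1,1}\Delta_{2,2}-\Delta_{1,2}\Delta_{2,1})I_2=(\det\Delta)I_2,$ where the off-diagonal entries vanish because $\mu_\bullet(R)[\varphi]$ is commutative.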
It remains to expand $\det\Delta.$ Doing this by brute force, we write\[\det\Delta=\sum_{\sigma\in S_n}\op{sgn}\sigma\prod_{k=1}^n\Delta_{k,\sigma(k)}.\]Because each of the entries $\Delta_{k,\sigma(k)}$ lives in $\mu_\bullet(I)[\varphi],$ and $\mu_\bullet(I)[\varphi]$ is a commutative ring (indeed, it sits inside the commutative ring $\mu_\bullet(R)[\varphi]$), the above expansion provides a polynomial in $\mu_\bullet(I)[\varphi]$ which vanishes.
Further, a term of maximum degree in the expansion of $\det\Delta$ will feature a $\varphi$ in each factor, and because $\varphi$ only appears when $k=\sigma(k),$ a term of maximum degree must come from $\sigma=\op{id}$ and so is in fact unique. This term is\[\op{sgn}\sigma\prod_{k=1}^n\Delta_{k,\sigma(k)}=+\prod_{k=1}^n(\varphi-\mu_{a_{k,k}}).\]We have written this out entirely to say that the leading term of our polynomial $\det\Delta$ is $1\varphi^n.$ Thus, we have a monic polynomial in $\mu_\bullet(I)[\varphi]$ which vanishes, which is exactly what we wanted. $\blacksquare$
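For example, in the $n=2$ case the full expansion reads\[\det\Delta=(\varphi-\mu_{a_{1,1}})(\varphi-\mu_{a_{2,2}})-\mu_{a_{1,2}}\mu_{a_{2,1}}=\varphi^2-\mu_{a_{1,1}+a_{2,2}}\varphi+\mu_{a_{1,1}a_{2,2}-a_{1,2}a_{2,1}},\]which is visibly monic with lower-order coefficients of the form $\mu_r$ for $r\in I.$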
We remark quickly that we can strengthen this statement some with little effort. Indeed, in our expansion of $\det\Delta,$ a term of degree $k$ will come from choosing a $\sigma\in S_n$ with at least $k$ fixed points (fixed points are the only way to get a $\varphi$) and then choosing exactly $k$ of the available $\varphi$s from the resulting product\[\op{sgn}\sigma\prod_{i=1}^n(\varphi1_{i=\sigma(i)}-\mu_{a_{i,\sigma(i)}}).\]The point of saying this is that choosing $k$ of the $\varphi$s means that the coefficient is a product of $n-k$ elements in $I.$ That is, this coefficient lies in the ideal $I^{n-k}.$ Adding up all of the $\varphi^k$ terms, the total coefficient still lives in $I^{n-k}$ because $I^{n-k}$ is closed under addition. So we have the following.
Proposition. Fix $R$ a ring and $M$ an $R$-module, where $M$ is generated by $n\in\NN$ elements. Now suppose $\varphi:M\to M$ is an $R$-module homomorphism such that $\varphi(M)\subseteq IM$ for an ideal $I\subseteq R.$ Then we can write \[\varphi^n+\sum_{k=0}^{n-1}\mu_{r_k}\varphi^k=0,\] where $r_k\in I^{n-k}.$
This follows from the above (albeit terse) discussion. $\blacksquare$
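For example, in the $n=2$ expansion computed earlier, the coefficient of $\varphi$ is $\mu_{-(a_{1,1}+a_{2,2})}$ with $-(a_{1,1}+a_{2,2})\in I^{2-1}=I,$ and the constant term is $\mu_{a_{1,1}a_{2,2}-a_{1,2}a_{2,1}}$ with $a_{1,1}a_{2,2}-a_{1,2}a_{2,1}\in I^{2-0}=I^2,$ matching the proposition.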
Anyways, we will not actually use this strengthening in our discussion of Nakayama's lemma. Here is our main attraction.
Theorem. Fix $R$ a ring and $M$ a finitely generated $R$-module. Further suppose there is an ideal $I\subseteq R$ such that $1+I\subseteq R^\times$ and $IM=M.$ Then actually $M=0.$
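As a sanity check, if $M$ is generated by a single element $m_1,$ then $IM=M$ lets us write $m_1=am_1$ for some $a\in I,$ so $(1-a)m_1=0,$ and then\[m_1=(1-a)^{-1}(1-a)m_1=0\]because $1-a=1+(-a)\in1+I\subseteq R^\times.$ Thus $M=Rm_1=0$ already in this case; the full proof runs the same idea through the determinant trick.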
We begin by using the $IM=M$ condition, which says exactly that $\op{id}(M)\subseteq IM.$ Thus, the determinant trick with $\varphi=\op{id}$ gives us some expression\[\op{id}^n+\sum_{k=0}^{n-1}\mu_{r_k}\op{id}^k=0\]for $r_k\in I.$ Because $\op{id}^k=\op{id}$ is the identity endomorphism for each $k,$ we can rearrange the above into\[\op{id}+\sum_{k=0}^{n-1}\mu_{r_k}=0.\]In particular, setting $r:=\sum_{k=0}^{n-1}r_k\in I,$ the additivity of $\mu_\bullet$ gives $\mu_r=\sum_{k=0}^{n-1}\mu_{r_k}.$ Noticing that $\op{id}=\mu_1,$ the above now reads $\mu_{1+r}=\mu_1+\mu_r=0.$
However, now we use the fact that $1+I\subseteq R^\times,$ implying $1+r$ is a unit; let $(1+r)^{-1}$ be its inverse. To finish, we fix any $m\in M$ so that\[m=(1+r)^{-1}\cdot(1+r)m=(1+r)^{-1}\cdot0=0\]because $\mu_{1+r}=0.$ Thus, $M=0,$ and we are done here. $\blacksquare$
I am under the impression that Nakayama's lemma is mostly important in the case where $R$ is a local ring with $I=\mf m$ its maximal ideal. We have to show that $1+\mf m\subseteq R^\times$ in this case, but this is not hard.
Lemma. Suppose $R$ is a local ring with maximal ideal $\mf m.$ Then $1+\mf m\subseteq R^\times.$
Indeed, pick up any $m\in\mf m,$ and we examine $1+m.$ Well, $1+m\in\mf m$ implies that $1=(1+m)-m\in\mf m,$ which implies $\mf m=R,$ violating that $\mf m$ is a maximal ideal. Thus, $1+m$ is not in $\mf m.$
On the other hand, if $1+m$ is not a unit, then the ideal $(1+m)$ is proper and hence contained in some maximal ideal by Zorn's lemma, so the uniqueness of the maximal ideal implies $1+m\in\mf m,$ which we just showed cannot be the case. Thus, $1+m\in R^\times,$ which is what we wanted. $\blacksquare$
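For a concrete example, we can take $R=k[[x]]$ to be a ring of formal power series over a field $k$: here the maximal ideal is $\mf m=(x),$ and we can even write down the inverse of $1+xf$ for $f\in k[[x]]$ explicitly as the geometric series\[(1+xf)^{-1}=\sum_{n=0}^\infty(-xf)^n=1-xf+x^2f^2-\cdots,\]which makes sense because only finitely many terms contribute to any given coefficient.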
In fact, more is true: if $R$ is a ring with $\mf m$ a maximal ideal satisfying $1+\mf m\subseteq R^\times,$ then we can actually conclude that $R$ is a local ring. We outline the argument because it is cute. Very roughly, an ideal $\mf m'\not\subseteq\mf m$ would have $\mf m'+\mf m=R$ (this sum is an ideal properly containing the maximal ideal $\mf m$), which promises $m'\in\mf m'$ and $m\in\mf m$ such that $m'+m=1.$ But then\[m'=1+(-m)\in(1+\mf m)\subseteq R^\times.\]Thus, $\mf m'$ is actually all of $R.$ This verifies that any maximal ideal must be $\mf m,$ for otherwise that maximal ideal would contain elements not in $\mf m$ and hence be all of $R.$
The point of all of this is to say that if we want easy access to an ideal $I$ with $1+I\subseteq R^\times,$ as required by Nakayama's lemma, and in particular if we want to take $I$ to be a maximal ideal, then we really want $R$ to be a local ring. In particular, having $I$ be maximal means that $R/I$ is a field, which is nice.