Ladder of Algebraic Structures
(jwkennington.com)

I first encountered a diagram of algebraic structures at the end of Jeevanjee's second chapter, "Vector Spaces", which elegantly summarizes the high-level differences in structure between sets, vector spaces, and inner product spaces. I've attempted to augment this map along two dimensions: a structure dimension that aims to measure the number of attributes an algebraic object has, and a specificity dimension that measures the number of constraints placed on each attribute.
This is aimed primarily at mathematical physics, and is intended as a quick reference -- it's obviously incomplete and isn't a substitute for Hungerford, Lang, or [insert favorite algebra book].
I hope you find it as helpful as I did in making it!
There is also the abstract algebra cheatsheet [1]. Not my work; I just bookmarked it a couple of years ago.
Most algebraic structures are best understood by the axioms they satisfy. For example, basically every subset of the abelian group axioms is useful enough to have a name. Wiki has a really nice table:
Semigroupoid
Small Category
Groupoid
Magma
Quasigroup
Unital Magma
Loop
Semigroup
Inverse Semigroup
Monoid
Commutative monoid
Group
Abelian group
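The rows of that table differ only in which of a few axioms (associativity, identity, invertibility) hold, so for a finite operation you can classify the structure mechanically. A toy sketch (the function name and the coarse four-way classification are my own; the full table distinguishes more cases):

```python
from itertools import product

def classify(elems, op):
    """Classify a finite magma (elems, op) by which group axioms hold."""
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elems, repeat=3))
    idents = [e for e in elems
              if all(op(e, a) == a == op(a, e) for a in elems)]
    has_id = len(idents) == 1
    invert = has_id and all(
        any(op(a, b) == idents[0] == op(b, a) for b in elems)
        for a in elems)
    if assoc and has_id and invert:
        return "group"
    if assoc and has_id:
        return "monoid"
    if assoc:
        return "semigroup"
    return "magma"
```

For instance, Z/3Z under addition classifies as a group, while {0, 1, 2} under max is only a monoid (identity 0, but no inverses) and under subtraction mod 3 is a bare magma (not associative, no two-sided identity).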
This is one of those things where better naming would make mathematics easier, imho. The words are just so random and inconsistent.
Example: "commutative" and "abelian" are synonyms, but there's "commutative monoid" and "abelian group". Why not use the same adjective? And then there's the random bag of words that have nothing to do with the concept, like "magma".
Could take a page out of the biologists' book. "What's this thing?" Transcriptase: an enzyme (-ase) that transcribes, DNA to RNA. "What about this?" Reverse transcriptase: does the reverse of transcriptase.
Angiotensin-converting enzyme - does exactly what it says on the tin. You can lex it even further:
- Angio - vessel (from Greek angeion, vessel)
- Tens - as in tension, from Latin tendere, to stretch
- (-in) - suffix associated with polypeptides
- Convert - to turn around, from Latin convertere:
- Con - with, together
- Vert - turn
- En - in, inside
- Zyme - from Greek zymē, leaven; loosely, a biological thing that causes leavening
It just makes so much sense! Lexemes are so cool. Like digging into linguistic source code.
We suffer here from lack of classical education. Greek and Latin would probably help.
How did they get people to agree to it? Mathematical terminology is a crime, but the problem is that it's very hard to get people to coordinate on different terminology.
Why not call the DNA to RNA enzyme reverse transcriptase and the RNA to DNA one transcriptase?
Commutative and abelian aren't really synonyms. "Abelian" is reserved for objects that have a certain amount of rigidity. Commutative monoids are squishy, while abelian groups are very rigid. Another place you'll see the name "abelian" is "abelian Lie algebras", which are also rigid. "Abelian categories" axiomatize the kind of rigidity abelian groups have.
Magmas are usually called "groupoids", but there's another generalization of group also called "groupoids". I'm actually not sure they really deserve a short name, rather than just "set with a binary operation", since there isn't much you can say about them in that generality that you can't generalize to "set with two binary operations", "set with a binary and a trinary operation", etc. The argument for a name is it gives you something to modify, since there are interesting special cases such as "medial groupoids". (An example of a medial groupoid is the real numbers with the "average of two numbers" operation.)
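The medial law is (a·b)·(c·d) = (a·c)·(b·d), and the averaging example satisfies it exactly, since both sides come out to (a+b+c+d)/4. A quick numerical check (my own sketch):

```python
import random

avg = lambda a, b: (a + b) / 2  # the "average" groupoid on the reals

# Medial law: (a*b)*(c*d) == (a*c)*(b*d)
for _ in range(1000):
    a, b, c, d = (random.uniform(-10, 10) for _ in range(4))
    assert abs(avg(avg(a, b), avg(c, d)) - avg(avg(a, c), avg(b, d))) < 1e-9

# Note it is commutative but not associative:
# avg(avg(1, 2), 3) = 2.25, while avg(1, avg(2, 3)) = 1.75
```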
https://en.wikipedia.org/wiki/Rng_(algebra) is a small example of trying to use more consistent names, but it's too punny for my taste... (Rng is a ring without an identity element)
There's also a rig (a ring without "n"egatives): https://ncatlab.org/nlab/show/rig
I agree. Too cute for its own sake.
Aside: "wiki" is a term referring to a general class of software. The name of the crowd-sourced, free encyclopedia is "Wikipedia", as it is built with wiki software.
In other words, Wikipedia is a member of the set of wikis. You wouldn't call "5" just "integer", e.g. it would be confusing to say "there are integer fingers on one hand".
If someone came up to me and said, "there are integer fingers in one hand" I would be the opposite of confused.
It's the kind of "technically correct" that is literally useless.
I'd like to see an extension of this table with the negations of these axioms.
Here is a beautiful old paper about data structures built using a binary join operator, exploring the 16 possible combinations of four properties: unit, idempotence, associativity, commutativity.
The resulting grid can be factored around set, bag, list and binary tree, with empty/non-empty variants.
Then there is interaction of the structures with binary operators on the data elements themselves, giving a nice analysis of map, filter, fold (reduce) in functional programming.
A.Bunkenburg, The Boom Hierarchy
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49....
There is a rich literature if you chase the references back and forth, starting from Bird-Meertens Formalism (Squiggol), Hoogendijk, through Backhouse and Malcolm, to Meijer and McBride.
I think negations usually don't prove interesting, and mathematicians essentially use "non-" to mean "not necessarily". So the theory of "noncommutative rings" includes the theory of "commutative rings" as an easy special case.
What would you do with that?
For example, I can see the use of commutativity (ab = ba) and anticommutativity (ab = -ba), but I'm not sure what I'd do with the negation of commutativity (ab ≠ ba).
Nope: the negation is "there is a pair a, b such that ab ≠ ba", which just means "strictly non-commutative group". I don't think there is a relevant theory to be done about them (otherwise, I guess it would have been done).
Ah, whoops. That seems even more useless, though.
Non-commutativity also lets one-sided units come apart: a magma can have many left units and no right unit (or vice versa). Note, though, that if a left unit 1L and a right unit 1R both exist, they must coincide: 1L = 1L * 1R = 1R.
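A concrete (if degenerate) illustration of one-sided units coming apart: under right projection a*b = b, every element is a left unit and no element is a right unit (my own toy example):

```python
# Right projection on {0, 1}: a * b = b.
op = lambda a, b: b
elems = [0, 1]

# e is a left unit if e*a == a for all a; a right unit if a*e == a for all a.
left_units = [e for e in elems if all(op(e, a) == a for a in elems)]
right_units = [e for e in elems if all(op(a, e) == a for a in elems)]

# Every element is a left unit; no element is a right unit.
```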
What I would love are examples of how they are useful.
For me, the first example where I really got why algebraic structures are useful was this video on using abstract algebra in analytics [0].
This helped me grasp something that I had read from Alexander Stepanov[1] that I hadn't fully understood before (not being familiar with the algebraic terminology):
> I suddenly realized that the ability to add numbers in parallel depends on the fact that addition is associative...In other words, I realized that a parallel reduction algorithm is associated with a semigroup structure type. That is the fundamental point: algorithms are defined on algebraic structures.
I think the use case of building infrastructure for parallel/distributed computation as described above is a nice, concrete example of why using abstract algebra in our programs can be useful. It certainly isn't the only use case though. Other things include managing complex control flow, or passing an implicit context through a computational pipeline.
[0] https://www.infoq.com/presentations/abstract-algebra-analyti...
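Stepanov's point can be made concrete in a few lines: as long as the operation is associative, you can reduce chunks independently and combine the partial results, and the regrouping cannot change the answer. A sketch (the function name and chunking scheme are my own):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def parallel_reduce(op, xs, identity, workers=4):
    """Reduce xs with op by reducing chunks independently, then combining
    the partial results. Correct only when op is associative, because the
    chunking regroups the parentheses."""
    if not xs:
        return identity
    size = max(1, len(xs) // workers)
    chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: reduce(op, c), chunks))
    return reduce(op, partials, identity)
```

Since the chunks stay in order, this even works for associative but non-commutative operations like string concatenation; commutativity would additionally let you combine partials in any order.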
Thank you!
This is a really cool presentation where the authors "step up the ladder" to design a really elegant API for animations as semirings (where * is used to sequence animations, and + for animations running in parallel), and then go on to implement it in Swift: https://bkase.github.io/slides/algebra-driven-design/
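The algebraic core of that design can be caricatured in a few lines. Here is my own toy version (not the slides' Swift API): if all we track is duration, sequencing adds durations and parallel composition takes the max, which is exactly the max-plus flavor of a semiring, so distributivity holds.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Anim:
    """Toy animation: all we track is total duration in seconds."""
    duration: float

    def __mul__(self, other):  # sequencing: durations add
        return Anim(self.duration + other.duration)

    def __add__(self, other):  # parallel: animations overlap, take the max
        return Anim(max(self.duration, other.duration))

fade, slide, spin = Anim(1.0), Anim(2.0), Anim(0.5)
combo = fade * (slide + spin)  # fade, then slide and spin together
```

Distributivity, fade * (slide + spin) == fade*slide + fade*spin, is what lets you refactor an animation expression without changing its timing.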
Thank you!
see the second-to-last slide for a mapping from Algebraic Structures to Computer Science Concepts. http://comonad.com/reader/wp-content/uploads/2009/08/Introdu...
Thank you!
Thank Edward Kmett! That slide is single-handedly responsible for me getting into abstract algebra.
Regardless of the "ladder" (or any other attempt to organize algebraic structures), what I find interesting (and somewhat unexpected) is that each particular structure exhibits so many features exclusive to it, and such rich behavior not found in any other structure (even closely related ones, like commutative vs. non-commutative rings), that these attempts at organization and generalization seem to have not much value. Only category theory has managed to bring in something of a common viewpoint on many mathematical constructs (and not just those in algebra).
Slightly related: http://nicolas.thiery.name/Talks/2018-10-08-CategoriesPyData...
(How this is implemented in SageMath.)
Likewise this is a pretty useful chain of inclusions:
commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ finite fields
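One payoff of that chain: the Euclidean algorithm needs nothing but division-with-remainder, so the same few lines compute gcds in any Euclidean domain. A sketch using Python's `divmod` protocol (my own framing):

```python
def euclid_gcd(a, b):
    """Euclidean algorithm: uses only division-with-remainder (divmod),
    so it runs verbatim in any Euclidean domain, not just the integers."""
    while b:
        _, r = divmod(a, b)
        a, b = b, r
    return a
```

The same loop would work unchanged on, say, a polynomial or Gaussian-integer class that supplies an appropriate `__divmod__`; that genericity is exactly what "Euclidean domain" packages up.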
Was just about to post that. You can get very fine-grained from sets to any mathematical structure based on sets, where the inclusion chain is a spectrum of structureless to structured.
Categories are algebraic structures related to this hierarchy:
- Monoids are Categories with a single object.
- Algebras (Non-commutative, Associative) are k-linear Categories, with a single object.
- Any object X in a k-linear category comes with an algebra: R = End(X) = Hom(X,X).
- Any other object Y comes with an R-module: Hom(X, Y)
- In some cases, we can use this to describe the category as a category of R modules: https://en.wikipedia.org/wiki/Gabriel%E2%80%93Popescu_theore...
See page 4 of https://leanprover-community.github.io/papers/mathlib-paper.... for a part of the hierarchy of algebraic structures in the Lean theorem prover. (If you give it a normed field, it will use this hierarchy to automatically deduce that it is also a ring or a topological space, etc...)
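That inference looks roughly like this in Lean 4 with mathlib (a sketch; I haven't pinned this against a specific mathlib version):

```lean
import Mathlib

-- A normed field is, in particular, a ring and a topological space;
-- typeclass search walks the hierarchy to find these instances.
example (K : Type*) [NormedField K] : Ring K := inferInstance
example (K : Type*) [NormedField K] : TopologicalSpace K := inferInstance
```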
Small note: people often (but don't always) assume that a ring is unital (has an identity element), and that an algebra over a field is unital and associative.
Also, the label "algebra" is vague here, and refers to an "algebra over a field", but sometimes it refers to an "algebra over a ring".
This diagram doesn't show semigroup and monoid. Although these structures aren't used in physics much, I find them very useful for understanding groups.
For clarity: this is an "algebra over a field" vs. the more general concept of an algebra over a ring. More generally an algebra A, over a ring R, an R-algebra, is a ring A equipped with a map Hom(A,Z(R)). An algebra over a field is a special case. Here's a "fun" object for you to consider:
> More generally an algebra A, over a ring R, an R-algebra, is a ring A equipped with a map Hom(A,Z(R)).
I don't think that's the usual definition of an algebra. For example, it would mean that there is no difference between an algebra over a non-commutative ring and over its centre, which seems weird; and it clashes with the usual habit to regard every non-0 commutative ring as a non-trivial ℤ-module, whereas, for example, the only homomorphism ℤ/2ℤ → ℤ is the trivial one.
I would expect rather the datum of an R-algebra structure on a ring A to be a ring homomorphism R → End_{gp}(A). EDIT: Now that I think of it, maybe you got your A and R mixed up and meant the more restrictive definition, whereby the ring homomorphism I mention is supposed to factor through R → Z(A) → End_{gp}(A)? I'd call this more restricted notion, at least over a unital ring R, a unital algebra A (but often people want implicitly to assume unital-ness).
I think that usually when people say “algebra over a ring” they assume that ring to be commutative, so that the word “bilinear” in “bilinear multiplication” is useful. It’s possible to define an algebra over a non-commutative ring as a bimodule (rather than left module or right module) equipped with a bilinear multiplication, but I have rarely seen this used.
The definition the parent poster used (or intended to use, but wrote the wrong way around, I believe) was that an algebra over a non-commutative ring is just an algebra over its commutative centre. (In which case, we’re still really just talking about algebras over commutative rings).
But the definition doesn't work even for commutative rings; as I mention, it says that the only ℤ-module structure on ℤ/2ℤ is the trivial one, which is not the usual understanding of the term. I agree that, if you switch A and R in Hom(A, Z(R)), then an element of the Hom space Hom_{ring}(R, Z(A)) makes A into an R-algebra, but I would argue it's not the only way; there's a map Hom_{ring}(R, Z(A)) -> Hom_{ring}(R, End_{gp}(A)), but it need not be surjective if the rings aren't assumed unital. Consider, for example, a polynomial ring R = k[t] and its ideal A = tR, which has a natural structure of an R-algebra.
At long last, my disastrous and scarring grad-school experience in representation theory can save the day!
First, regarding the OP: Having spent many years studying algebra, I don't find the hierarchy of axioms to be very useful in thinking about these things. Sure, you can think of a field as a "commutative ring with inverses", but rings and fields present themselves so differently that this connection doesn't end up being all that useful. Fields are not rich enough on their own to support much interest. You'll find them mostly as building blocks rather than powerful tools in and of themselves. Ditto for modules and vector spaces. Sure, a module is "like a vector space but over a ring", but vector spaces are so boring by themselves that they show up mostly as scaffolding. The study of modules, on the other hand, is its own branch of mathematics. It's much more useful to think of them in terms of what you actually do with them.
Now, on to definitions. The following few paragraphs are all very small-minded and look far more complicated than they actually are. It all encodes pretty much what you'd expect.
If you want to define algebras over commutative rings, you need to start with left- and right-algebras. A left-algebra is an abelian group A equipped with a map \phi: R -> End(A). The abelian group structure defines the addition in the algebra, and the map defines the left-multiplication: if r \in R, and a \in A, then you define a times r as \phi(r)(a), where \phi(r) is an endomorphism on A.
A right-algebra is the same, only the map is from R to the opposite ring of End(A), where the opposite ring is the one you get by just reversing the multiplication. You need to do this because associativity demands that you compute ((a)r)s, where a \in A, r,s \in R, by first acting on a with r, then by s. But with the usual conventions of composition of functions, \phi(r) \circ \phi(s) means you first "do" s, then r. So you need to flip it. Working with left- and right-algebras is a pain in the butt because you have to carry around a ton of left-right nonsense.
A bialgebra (in the literature I read) is an abelian group that is both a left- and right-algebra. A central bialgebra is one where the left and right multiplication are the same, which is not a given. Noncentral bialgebras are especially annoying, mostly because you have to figure out how to do pre-subscripts in LaTeX so you can write nonsense like "_R M_S".
Obviously, all of these things collapse if R is commutative. Noncommutative ring theory requires a special kind of patience. And don't even get me started on noncommutative geometry.
You are speaking with a representation theorist, too. It seems to me that we agree on the definition. (I agree that I was writing as if `R` were automatically commutative, after having made a big fuss about the possibility that it wasn't. Incidentally, if you feel insufficiently scarred, you might like to expand your stable of algebras: there is the notion of a coalgebra, which is dual to that of an algebra; and I believe that the usual notion of bialgebra is of a ring equipped with the structure both of an algebra and a coalgebra: https://en.wikipedia.org/wiki/Bialgebra ; but maybe it's different in the world of algebras over non-commutative rings, which is not my speciality. Then among the bialgebras are the Hopf algebras, etc.)
Somewhat ironic that lattices are missing from this lattice of algebraic structures :-) Though I guess they might not be as important in mathematical physics as in some other areas.
Max Tegmark has a larger diagram for math in his paper:
Is "the theory of everything'' merely the ultimate ensemble theory?
https://arxiv.org/abs/gr-qc/9704009
and a sketchy one for physics in his paper:
The Mathematical Universe
Robert Geroch's _Mathematical Physics_ is organized around algebraic structures, motivated by category theory.
+1 for Geroch's book
It's a really unique book -- I was pleasantly surprised by it. It's probably the most lucid introduction to category theory I've read.
Another attempt at that diagram, with more structures but less detail on how they differ:
How do geometric spaces like affine, projective, etc., come into this taxonomy?
A projective space is defined as a quotient of a vector space (minus the origin) under the equivalence relation x ~ y ⟺ there exists k ≠ 0 such that x = ky.
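The quotient can be made concrete by picking one canonical representative per equivalence class, e.g. scaling so the last nonzero coordinate is 1. A sketch (the helper name and normalization convention are my own; this is the usual affine-chart trick):

```python
from fractions import Fraction

def proj_repr(v):
    """Canonical representative of a projective point: scale v so its last
    nonzero coordinate is 1. Raises StopIteration on the zero vector, which
    names no projective point."""
    v = [Fraction(x) for x in v]
    pivot = next(x for x in reversed(v) if x != 0)
    return tuple(x / pivot for x in v)
```

Two vectors name the same projective point exactly when they normalize to the same tuple: proj_repr((2, 4, 6)) and proj_repr((1, 2, 3)) both give (1/3, 2/3, 1).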
Why is "commutative +" a step up rather than a step to the right? I guess there should be Abelian groups and commutative rings somewhere between groups and modules.
Probably because the diagram originated in a Vector Spaces book, and commutativity is viewed more as a valuable property than a structural constraint.
Do physicists have any use for non-commutative algebra? It already seems pretty niche in mathematics.
All of quantum mechanics is non-commutative algebra. The commutation relation [x, p] = i*hbar gives you the Weyl algebra, for example.
Rotation group maybe? And those gauge symmetries... heck, just look into quantum mechanics operators.
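The rotation-group example is easy to check numerically: rotations about different axes don't commute, so the order you compose them in changes where a vector ends up (numpy sketch, my own):

```python
import numpy as np

def rot_x(t):
    """Rotation by angle t about the x axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    """Rotation by angle t about the z axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

a = np.pi / 2
v = np.array([1.0, 0.0, 0.0])

# 90 degrees about x then z sends v to ~(0, 1, 0);
# 90 degrees about z then x sends v to ~(0, 0, 1).
first = rot_z(a) @ rot_x(a) @ v
second = rot_x(a) @ rot_z(a) @ v
```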
Not often that something on HN causes me to hit print, but that figure is worth printing and tucking into a book.
Questions:
1. Isn't this more like a tree, where only one path is shown?
2. Is it possible to find a pattern and extend the ladder in the most logical way?