Why Mathematics is Boring (2007) [pdf]
math.ucr.edu

I always struggled (and still struggle) with math.
A couple of years ago, randomly browsing YouTube, I came across a homemade video asking how people figured out the distance to the moon before modern technology. The host starts out small scale, showing he can calculate the distance to things in his back yard using trigonometry, and then scales it up to the moon.
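The back-yard trick (and the moon-scale version of it) is just triangulation: sight the target from the two ends of a baseline of known length, and the two angles pin down the distance. A minimal sketch in Python - the setup and numbers are made up, not taken from the video:

```python
import math

# Sight a target from both ends of a baseline of length b.
# alpha and beta are the angles between the baseline and each line of sight;
# the perpendicular distance h satisfies b = h * (cot(alpha) + cot(beta)).
def triangulate(b, alpha, beta):
    return b / (1 / math.tan(alpha) + 1 / math.tan(beta))

# Pretend the target sits at (3, 4) and we pace out a baseline from (0, 0) to (10, 0):
alpha = math.atan2(4, 3)       # angle measured at (0, 0)
beta = math.atan2(4, 10 - 3)   # angle measured at (10, 0)
print(triangulate(10, alpha, beta))  # ~4.0: the target's distance from the baseline
```

Stretch the baseline from a back yard to something Earth-sized and the same two-angle measurement reaches the moon.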
My mind was blown, because no one ever told me that. It was simple; anyone could understand it. When I was in school, all I was told was to memorize abstract formulae, like calculating the lengths of a triangle's sides from its angles and the known length of one side. It was never contextualized with any actual applications, let alone interesting or fascinating ones.
Most math textbooks do contextualize it like that, so I guess yours did too. It's just that in school kids almost never care; they just want to pass the tests, so they ignore all the contextualization and remember the minimum amount required to solve the test questions. So you likely saw those things many times before and forgot, since you didn't find them important back then.
That is the main struggle for many math teachers: they try to do all these fun and interesting explanations, and the kids just ignore them, go straight for the formulas, and forget everything else. The problem seems easy to solve until you have experienced trying to do it yourself with a real class of kids who need the material for real grades. It can be done, but a teacher who could do it could make far more money in entertainment, since that is what it takes to get kids to pay attention.
There was a book called "Calculus the EZ Way" https://vpl.bibliocommons.com/v2/record/S38C1299093 that I discovered the summer before uni. It was so much fun to read that I learned calculus again just for the sheer pleasure of it. They used medieval characters in a magical kingdom to explain calculus, but that was not the innovation. The real innovation was that they explained the problems encountered with the existing methods before they introduced a new one.
That was novel. That is how research is done…you start with a problem and figure out a way forward.
That is not how textbooks are written. It's like, why waste our valuable paper printing the wrong way to do something, even if it helps people learn? They only print the right way to do it. The student loses the ability to participate in the discovery process and just becomes a dumb initiate, forced to believe whatever is written down. It's more like a degenerate religion you're forced to memorize, without the inspiring examples of all the saints and martyrs who showed others the way before you.
I couldn't agree more, by decontextualizing everything you're really robbing students of pretty much everything.
> Most math textbooks contextualize it like that
I'm all but certain mine didn't. Sure, there was the odd contextualization and real-world example here and there, but definitely not enough, and definitely not interesting or fascinating ones.
E.g. if there was one for trigonometry, it would have been something lame like "Alice is standing in a field some distance from Bob and Billy. Calculate the distance based on <trigon blabla>".
Hardly riveting stuff :P
I think the tricky part is that students, on any particular day, very much have to limit the amount of information they take in as they jump from subject to subject.
They don't have the time, attention span or memory to take in both the contextualization and the formulas. After all, it's only the formulas that really matter in the end, when performing the calculations in tests and exams where you have to show your working. I also find that I can understand the context quite easily, but it's another thing to then apply it through formulas.
This is coming from a non-teacher but past student.
At University I had more time to think about the contextuals during my degree, but not in high school jumping from subject to subject.
Yeah, my run-of-the-mill textbooks tried to invent fun scenarios like that. But alas, we just made fun of Adam for carrying a hundred apples back home for dinner.
The problem then seems to be tests that are too abstract. The students are optimising for tests that don't require them to solve real-world problems.
You can't post something like this and not post the video.
This isn't it, but Terence Tao does the entire cosmic distance ladder:
Here is a fun textbook I found on astronomy http://gron.ca/math/dupuis_1910/dupuis.pdf that is written in a more pragmatic style.
To me it was sometime in high school Physics class that math started to have any semblance of utility for solving actual problems. But by that time most students had already been bored to death by years of the most obtuse memorization and test-passing behaviors, so it was all lost. Even then, Physics was mostly a blizzard of formulas with absolutely minimal explanation and application. The exams were basically "cram as many of the formulas in the book as you can onto a single sheet of paper, then plug and play during the exam".
I did not do well in either set of subjects in K-12 -- even to the point that my graduation from high school was threatened. In college I forced my way back through it all by sheer force of will and got my A's.
There's something fundamentally broken in math/science pedagogy as these subjects aren't really all that difficult. There's far too much time spent memorizing things that are trivial to look up and way too little time understanding how to use them.
An analogy might be learning to cook, and spending all of your time remembering precisely how many spoons, bowls, cups, and cloves of garlic or other ingredients you have. And doing that kind of thing for years, and maybe seeing a demo once of pouring water into a cup. And tests might contain problems like "a party of 5 is coming over for dinner, are you able to set places for all attendees for a 7 course meal?"
The real message being sent is this: "Sorry kids, actually cooking from recipes is only for academics, and to get your PhD and be allowed into the hallowed halls of these academic cooks you must come up with one original recipe (edibility will be determined by peer review)".
In college I retook everything from Algebra up and found the math pedagogy focused more on symbolic manipulation, and on getting used to how it works in each subject, rather than on drilling arithmetic in various guises. Tests that required various pre-derived formulae were usually just open-book problems. And what mattered was how one went about solving the problem, not the rightness or wrongness of the answer. Calculators were absolutely expected, so you didn't waste time fighting trivial mistakes.
The sciences usually had a mandatory lab portion that forced application of math to the problem space. Because the labs typically had you collecting your own measurements, you had to work through the calculations yourself, since there was nowhere else to look up the answer. Again, the grade came from the methods and approaches, not slavish memorization.
Still, while I think the approach I encountered in college was much better than grade school, it still wasn't as good as it could be.
If you read any science papers, they start with a clear introduction with the aims, claims, importance and novelty of the work. A lot of math papers (but not all) just start off with a dry statement of what the theorems being proved are, and jump right into the proofs.
I always wonder why editors don't understand the importance of these things and don't enforce them.
Relatedly, a lot of recipe books just have a dry statement of what the ingredients are and how to combine them. They should be more like CS and science papers, and explain the narrative of why the recipe is exciting, and where it comes from, and what it means for the cook.
(Posted from a parallel universe.)
> why the recipe is exciting, and where it comes from, and what it means for the cook
I know you meant this as a joke, but many cookbooks are like that, and I for one value knowing the history (and science, as in "The Food Lab", highly recommended) of what I'm cooking or eating.
> They should ... explain the narrative of why the recipe is exciting, and where it comes from, and what it means for the cook.
You've just described practically any modern recipe website.
I think that's the joke.
As another user commented, cookbooks meant to be read rather than used as reference material are actually written like that.
Makes me wonder if that's the difference: do people not read math papers unless they are specifically interested in that very proof?
I actually find the forced structure of most non-math journals pretty terrible for readability. Why are results and methods forced to be separate? It means you have to play a "connect the dots" game, figuring out which parts of the methods refer to which parts of the results. This is obvious to the writer, but in some circumstances incredibly challenging for a less-informed reader. Math papers have no such forced separation, and have clearly numbered claims and proofs.
An example is a paper with several mouse experiments. The methods will often have a single section saying that mice were raised under such-and-such conditions, A or B, were treated with X or Y, and that samples were collected after certain times. But was Y given to the A or the B mice? From the results it's clear that X was done on both, but there's no mention for Y. I guess A is more the default, and they'd have specified B if that were the case, so probably A.
I think their being separated makes sense if you consider that papers are really targeted at in-field experts. Usually, if you know the field, it's very clear which results come from which methods.
Not saying the status quo is optimal, of course; just explaining why it is the way it is and probably won't change soon.
I for one welcome the "dryness" of mathematical writing. It feels clean, like reading a story without distracting ads.
A beautiful piece of advice I received as a student was to write mathematics as a series of definitions, propositions and proofs, with no text allowed to exist outside of those three. In practice it is difficult to enforce, but it is helpful to keep as an aim.
> A beautiful piece of advice …
Oh my goodness! So there are actually people who prefer that? I had to chew my way through a fair share of books like that, and I can't stand them. Clearly the person who thought of those definitions, propositions and proofs had some reason to think of them. Sometimes they were trying to solve a problem, sometimes they were combining ideas, sometimes they were looking for structures with certain aesthetic properties. There was always a why behind why they thought about this and not something else. Sometimes there are multiple possible such reasons; that is fine. In that case the author can select whichever they fancy most. Every time I had the misfortune to read a book like the one you describe, it felt like I was eating powdered milk without reconstituting it. There was a clear chain of thought between the ideas, and they chose to just hide it.
I understand that math requires work. One needs to get a paper and a pencil and work out examples, check proofs, play with definitions. But why wouldn’t the author write down what made them care about the next item?
> it is helpful to keep this as an aim.
Why?
Trying to study a very abstract concept without any examples is almost useless. There is, perhaps, some philosophical sense in which we could only be writing down mathematics for God to read, but humans searching the proposition space necessarily move heuristically. Formal generalization is fine (and even great) for working with objects, but pretty terrible for recognizing them. For example (and also meta-example), most people do not need to spend much time convincing themselves that the pigeonhole principle is true, at least in the finite case; but show most people a problem that it solves, and they will be unable to solve it.
As a non-mathematician this seems to be lacking a little something. Would you not want your paper to provide some indication of what you're attempting to communicate and why? Or is that information (as the joke goes) possible, and therefore trivial, to infer?
A great deal of maths is that you get to use theorems for purposes they were not intended for. Presenting a theorem in a "pure" form thus allows one to approach it without preconceptions.
I love the analogy with cooking recipes in another comment. Math papers are like recipe books, deliberately devoid of their social context. The same recipe may mean different things to different cooks - even contradictory things! Having a clean, neutral description of the recipe allows both cooks to safely refer to the exact same recipe, without endorsing contexts that they may find odious.
I agree that knowing the context in which a recipe was created, and the contexts where it has been used, is very useful. But it would be extremely annoying to have this explanation interleaved with the recipe description itself. This information is best kept separate. Then, at the beginning of the recipe you can have a list of links to cooks who have written different things about it.
> A great deal of maths is that you get to use theorems for purposes that they were not intended for. Presenting the theorem in a "pure" form thus allows to approach it without pre-conceptions.
Plenty of physics papers end up being useful in ways the authors never conceived. I don't think the authors writing down their best guess of the importance of their results, prevents others from using the results or techniques in alternate ways. In fact, I frequently see this happening.
But the authors writing down their best guess of the importance of their results helps others judge the minimum importance of the results, and decide whether they want to read the paper in the first place.
> But the authors writing down their best guess of the importance of their results helps others judge the minimum importance of the results, and decide whether they want to read the paper in the first place.
In many cases it will be unlikely that the authors even know the importance of the results. Forcing them to come up with an explanation would be useless at best, and an unbearable burden at worst (e.g., when a student has proved a theorem proposed by their advisor).
There are mathematical reviews, where mathematicians with a higher-level view on the field point to particular papers and explain their importance. Also, many journals have an editorial column that explains the papers on each issue.
While I was teaching C++, I would sometimes look at programs from a colleague and try to rewrite them in a more teachable form using C++14.
The abstraction capabilities of modern C++ made it more fun and allowed me to condense programs in totally unexpected ways. More abstract code was simpler and sometimes easier to understand.
But after a while (usually by my version 4 or 5), if I abstracted it too much (templatized it, made it work for Unicode or char types, etc.), the complexity would shoot up again. It became incomprehensible even to me, even though I had written it. I knew it worked because it gave the same result, but it was no longer anchored in anything real.
There is a zone of usefulness between the completely abstract and the concrete. Physics lives in this zone: it uses mathematics, but physicists don't feel the need to generalize to N dimensions or to assume the constants of nature are variable, unless they have to.
This is why all good descriptions of mathematics start by showing a concrete problem to solve. Gauss invented the FFT algorithm to simplify his calculations of the orbit of Ceres. He had the numbers in front of him and tried to reduce his computational workload by exploiting a repeated pattern in the computations. Teaching the FFT as a fait accompli and then showing the asteroid orbit as a sample application is ass backwards.
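The saving is easy to demonstrate, even though this sketch has nothing to do with Gauss's original notation: a naive DFT does N^2 multiply-adds against a matrix full of repeated twiddle factors, and the FFT reuses those repeats to get the identical answer in N log N.

```python
import numpy as np

# Naive DFT: multiply by the full N x N matrix of twiddle factors (N^2 work).
def naive_dft(x):
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # note the repeated entries
    return W @ x

x = np.random.default_rng(1).standard_normal(64)
# The FFT exploits the repetition but produces the same numbers.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```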
Reminds me of my general principle of the best solutions being "square" in the sense that the effort you expend on them is quite similar on any axis. By the time you're past abstraction's diminishing returns for the given problem, you're spending way more effort on that than anything else (which is understandable - it's fun!) and your efforts aren't "square" any more.
(A toy example of the 'square' thing: imagine you have to make 10,000 widgets for 1 dollar each. Making them all by hand for $1 without optimizing the process would be inefficient. So would spending $9,999 to build a machine which could make widgets for $0.0001. But spending $100 to optimize production so you can make the widgets for $0.01 each is a massive win ($200 total vs. $10,000 for either alternative).)
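Spelling out the arithmetic of the three strategies:

```python
n = 10_000                      # widgets needed, at a $1 budget each

by_hand  = n * 1.00             # no optimization at all
machine  = 9_999 + n * 0.0001   # all-in on automation
balanced = 100 + n * 0.01       # modest optimization

# by_hand and machine both come to ~$10,000; balanced comes to $200.
```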
That is an interesting thought. What’s it an example of? Convex optimization?
As a mathematician, let me assure you that the point of view in that comment is not popular amongst mathematicians.
You are absolutely right, context is king.
I'm sorry, but this is horrible advice. It sacrifices communication for the sake of enforcing an arbitrary aesthetic.
Until the mid-20th century, mathematics was never communicated in this austere manner.
When you are a working mathematician, you never start with a definition. You start with a context in which your exploration begins.
It might be a question which someone else asked that interests you (and there is a story as to why). It might be that you don't even have a question, but the objects of your study are not studied enough, so you hope you stumble into one. You can easily tell why this is an interesting thing to look at.
You do some calculations, take a look at a few examples, see if you can make a stab at the chaos in front of your eyes and find a pattern, which we formally call a conjecture.
Then you see if this pattern holds in other cases, and why. This shapes the backbone of the proof.
Once you have a basic idea of a proof, you can formulate a theorem. In the formulation, you list all the conditions to which your proof applies. The pattern might be much more general, but your proof might work e.g. "only in cases when the order of the group is invertible in the base field", or some other.
After all the work is effectively done, you decide that a concept that appears repeatedly in your reasoning deserves to have a name. Something convenient to call it by, so you don't have to repeat yourself. So you make a definition.
You then decide to share your joy with the world, and write a paper.
You listen to the "beautiful advice", and throw out anything that makes your paper interesting.
Context goes out of the window, along with any hope for the reader to have any idea why your paper is worth looking at. At best, you'll advertise this in talks, or explain over beers. Side note: you will have to drink a lot of beers to make it in math.
Then, you follow the advice again, and lay things out in the order exactly opposite to the one you were thinking in: definition - theorem - proof - conjectures - examples - context.
Wait, you already scrapped context, and the examples you started with aren't illustrating the results you ended up proving.
So you tidy up your paper, come up with more specific examples, and remove anything that wasn't on the direct path to your result.
Having climbed to a place where you can see better, you pull the ladder up. Good luck to anyone outside the group of five people who are actively working in this niche!
And finally, you write an abstract for your paper, where you mention the things you defined. The abstract doesn't make any sense by itself; one needs to be in-the-know to get half of it, and to read your paper to understand the rest.
In practice, it acts as a "No Trespassing" sign for the outsiders (i.e. anyone not in direct contact with the five people you have beers with at the Annual Niche Field Conference).
Satisfied, you lean back and post it to arXiv.
It's been a beautiful day, you think. This practice is difficult to enforce, but you kept it as an aim, and got pretty close to perfection (as exemplified by a Bourbaki text, or anything by Serge Lang, but I repeat myself).
Somewhere not too far away, a student in the class you're teaching cries.
----------------
I said "you", but as someone who's written a couple of math papers, that's really me too. We are all taught in a horrendously backwards (literally!) manner.
This perversion of the beautiful art isn't a new observation. I can't write about it better than Vladimir Arnold[1] (a titan whose name is, I hope, familiar to you) did.
It's worth a read to anyone who has ever studied mathematics:
[1] https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html
I do not disagree with you at all. Arnold is my mathematical hero, and his advice and insight are invaluable. I understand that math is done starting with proofs and ending with definitions and axioms.
Yet I have witnessed many young mathematics students who could not write a concise, self-contained proof, nor understand its value. I certainly was one of them, and this advice helped me. For these people, it is helpful to learn to organize their thoughts in an over-the-top, nearly Bourbakist, formal way. Also, the correctness of proofs is much easier to check this way, and anything incorrect or illogical sticks out immediately. Then, once you have written your stuff in that dry style, you can add some glimpses of discourse, which become much more valuable than if you had started with informal hand-waving. This is pretty much the writing style of Arnold: his proofs are breathtakingly concise and elegant, and there is an insightful discourse around them. The proofs stand on their own without the discourse, but the discourse alone would be worthless.
I like your analogy of climbing the cliff and pulling the ladder. But there is another cliff that goes even higher and you needed the ladder for that one! Of course you need to help others to build their own ladders.
> Somewhere not too far away, a student in the class you're teaching cries.
Maybe, maybe not. In any case, I agree that you cannot teach math in a purely Bourbakist style. I prefer a "visual" style, like that inspired by the books of Arnold, Strang and Needham, and I am the sole teacher in my lab who seriously uses the word "amplitwist" to refer to the complex derivative :)
My response is too long for HN; breaking up into two parts :)
Part 1:
==============
I feel like there is survivorship bias in your assessment: the students who would benefit from learning to construct a rigid argument without holes are the ones who can already hand-wave their way to the result, i.e. they already have the motivation and intuition to get there.
On the other hand, I feel like over 9 out of 10 people who could and would enjoy advanced mathematics get turned off by unnecessary formalisms some time in high school (Lockhart's Lament vividly describes how Euclidean geometry is massacred there - and that's both the first, and often the last, time they see proofs!).
To add insult to injury, rigorous reasoning is not introduced to students unless they are math majors, and even then, only when they take a Real Analysis course. The way we teach intro Calculus and Linear Algebra should be classified as a Geneva Convention violation and a crime against humanity: all the intuition of Bourbakism with none of the rigor.
It doesn't need to be this way. Even rigor can be fun. Just like everything else, rigor is a part of mathematics that we do for a reason. Once you approach the very concept of rigor the same way you approach, say, derivatives, you will see that there is no need to impose it on people.
How many times have you seen a "proof" that 0 = 1, usually derived from a coy division by zero, or abusing square roots, etc? People repost those on Facebook as memes. They are fun!
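For anyone who hasn't run into one, the classic specimen goes like this (the division by zero hides in the step that cancels a - b):

```
assume a = b
a^2 = ab                    (multiply both sides by a)
a^2 - b^2 = ab - b^2        (subtract b^2)
(a + b)(a - b) = b(a - b)   (factor both sides)
a + b = b                   (cancel a - b ... which is zero!)
2b = b                      (since a = b)
2 = 1                       (divide by b)
```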
But also, they are the motivating example for rigor. After all, that's the entire reason we need it in the first place: to avoid arriving at incorrect conclusions.
Without having a vast assortment of examples of arriving at incorrect conclusions, rigor is both unmotivated and unnecessary. Newton and Leibniz didn't need rigor when they invented Calculus, after all; hand-wavy infinitesimals did just fine. Why should the students bother?
There is no value in rigor in and of itself. All the effort to put mathematics on a rigorous footing gave us things like the Banach-Tarski paradox (which is, objectively, absurd, and only shows that the extent to which math models physics goes only so far!) and Gödel's incompleteness theorem (which shows that even attempting to reach Perfect Rigor is futile).
You don't need to introduce Peano's axioms to talk about number theory, and neither does any number theorist, really. And we wouldn't want any student to crank out a Principia while working on their topology homework.
So, treating rigor as a branch of math (which it is!), it needs to be introduced and taught just like any other branch - starting with context, stories, pitfalls, and seeing all the motivation for why we do things the way we do.
It starts with basic critical thinking, logic and philosophy classes, where people learn the difference between "All liberals support free healthcare" and "All supporters of free healthcare are liberals" (....well, I wish).
Going further, it's seeing the "proofs" that 0=1, or that all cats are grey (by induction). The latter "proof" is still the only thing that motivates me to check the "obviously true" things, like the induction step being applicable to the base case.
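For completeness, the grey-cats "proof" runs like this, and the hole is exactly where the induction step meets the base case:

```
Claim P(n): in any group of n cats, all cats are the same colour.
Base: P(1) is trivially true.
Step: take n+1 cats. Remove the first: the remaining n share a colour by P(n).
      Remove the last instead: again, the remaining n share a colour.
      The cats in the overlap belong to both groups, so all n+1 share a colour.
Hole: going from P(1) to P(2), the "overlap" is empty - nothing links the two
      single cats, so the step silently fails right at the start.
```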
In high school, I had a great little book called "Lapses in Mathematical Reasoning" by Bradis and co-authors. It was a perfectly accessible assortment of gotchas.
Zeno's paradoxes are a motivation for some of the rigor of Calculus (convergent sequences and infinite sums are the answer to the paradox).
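The resolution is worth seeing numerically; a minimal check that Zeno's "infinitely many steps" of 1/2 + 1/4 + 1/8 + ... add up to something finite:

```python
# Partial sums of sum_{k>=1} (1/2)^k; each equals 1 - (1/2)^n and marches to 1.
for n in (1, 2, 5, 10, 20):
    partial = sum(0.5 ** k for k in range(1, n + 1))
    print(n, partial)
```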
And when we look at rigor like this - as a thing that needs to be motivated, not an a priori good - we see a rather disturbing pattern: rigor has been introduced at the expense of clear reasoning.
Take Calculus. Teaching it with limits, epsilon-deltas, etc. without giving a motivation for why this complex machinery is needed is purely a waste, and a thing that made many people despise math (it turned me off from analysis for a very long time, personally).
The problems that this rigor addresses aren't even taught to the vast majority of people who take Calculus! Everything that the intro course covers can be taught with infinitesimals just fine without introducing the epsilon-delta rigor.
And, in fact, epsilon-delta rigor can be entirely dispensed with (because the infinitesimals can be put on a rigorous basis, with non-standard analysis). Epsilon-delta was not an achievement. It was a defeat. It was the greatest minds of the time not being able to figure out how to add rigor to the concepts that Leibniz and Newton introduced, and so they simply powered through and worked around the concept of infinitesimal to make some hairy math work.
With rigor, just like with anything else, we have to ask: what's the return on investment there? Is it a good bang for the buck? Why is hand-waving bad?
Having learned a subject, we know where hand-wavy reasoning can lead. We know that not all cats are grey, or that a continuous function doesn't need to be differentiable anywhere.
But there is no value in rigorous reasoning in Calculus if we are not running into monstrosities like the Weierstrass function. And, when we start out, we don't - because Nature is quite nice, math-wise. At least on a day-to-day scale.
Adopting Arnold's mindset, the amount of rigor in a mathematical argument is somewhat like the amount of precision in a physical model.
No sane person would start teaching physics with Einstein's relativity. But in math, not only do we do that, we never teach Newton's Laws - and in introductory classes, we don't even explain the formulas!
Imagine forcing high-schoolers to crunch Einstein's tensors when all they needed was F = mg, without ever explaining what curvature even is or why it's needed ("it'll come in handy, trust us").
This is what we do with Calculus - or in any area where rigor is used without justification.
>Then, once you have written your stuff in that dry style, you can add some glimpses of discourse that become much more valuable than if you had started with some informal hand-waving
You always start with some informal hand-waving. Not including it in your paper is, put simply, lying by omission.
And great mathematicians didn't shy away from prose, especially when introducing significant concepts. When I was trying to understand quaternions, I found all the texts I looked at stupefying - until I found Hamilton's book where he introduced them.
Not only did I get more from the first chapter than I was aware there was to know, I also learned things like where the word "vector" comes from when we use it to mean "a magnitude and a direction". Learning that was infinitely more valuable to me than seeing the axioms of a vector space (which, of course, you never need to remember - just write down a handful of rules that translations in a plane satisfy, and the chances that something that fits them ain't a vector space are zero, unless you go out of your way to make up a contrived example just for that purpose).
In fact, and that's Arnold's point, you lose no rigor by ditching formal reasoning when you can be concrete.
I believe that it is detrimental to the human brain to go through the exercise of "proving" that a collection of invertible operators, along with all their compositions, forms a group.
And yet, this is a common exercise! People spend time on this! Just watch [1]. The video goes on for seventeen minutes! For diagonal matrices with non-zero entries!
I feel that having this "rigor" is worse than saying that these matrices form a group because of course they do.
On the other hand, no time is ever spent explaining why the formal definition of "set with an operation" is introduced. That's because it's needless, of course; it seems that the sole purpose of this definition is to create exercises.
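For contrast, the whole seventeen-minute verification collapses into a few lines of numpy - a spot-check, not a substitute for seeing why "of course they do":

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_diag():
    # random invertible 2x2 diagonal matrix (entries bounded away from zero)
    return np.diag(rng.uniform(0.5, 2.0, size=2))

A, B = rand_diag(), rand_diag()
P = A @ B
assert np.allclose(P, np.diag(np.diag(P)))           # closure: the product is diagonal
assert np.allclose(A @ np.linalg.inv(A), np.eye(2))  # every element has an inverse
assert np.allclose((A @ B) @ A, A @ (B @ A))         # associativity comes free with matrix product
```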
=====
[1] https://www.youtube.com/watch?v=q_JqHQPbmUk
[2] https://math.stackexchange.com/questions/919040/proving-a-gr...
[3] https://math.stackexchange.com/questions/1108349/prove-that-...
Part 2:
=============
From the comment on that video:
>It is more fun to proof that the set of a 2 by 2 matrices with everywhere the same value x with x not equal to 0 is a group. The determinant of such matrices is 0 but it is still a group.
This is only surprising if you don't understand 2x2 matrices as operators on a plane - in which case the exercise is a cruel perversion (why on Earth would anyone want to consider such matrices, or check that they form a group with an identity that's not the identity matrix?! And how would one come up with this to begin with?!).
"Even though the determinant is zero" is a symptom of a conceptual gap. Of course the determinant being zero has nothing to do with these matrices forming a group! You can embed GL(2) into GL(3) by filling the rest of the entries with zero, and of course this will still be a group: because it acts on the XY plane in just the same way as before, and matrix multiplication still gives their composition because we defined it to work this way.
And of course matrices of the form [x x; x x] form a group. A better question would be, why wouldn't they?
Take the following kindergarten-accessible definition of a group: actions that you can undo, repeat, and combine.
Let's hand-wave the above exercise with this definition. What does [t t; t t] do? Let's take t = 1. [1 1; 1 1] takes a point [a, b] in the plane and sends it to the point [a+b, a+b].
Doesn't seem like you can undo that, because you don't know whether [3, 3] came from [1, 2] or [2, 1]. Bummer.
Well, no surprise: the operation [1 1; 1 1] smashes the whole plane onto a single line spanned by [1; 1], aka y = x (the image is spanned by the column vectors). We might as well ignore anything off this line, 'cause after applying [1 1; 1 1] we can't tell apart points that differ only off the line.
What does [1 1; 1 1] do on its image, the line y = x? It sends [a; a] to [2a; 2a] on the same line. So [1 1; 1 1] acts like multiplying by 2. That's certainly something you can undo.
The same works for other matrices; [x x; x x] acts like multiplying by 2x on that line. You can undo that as long as x is not 0.
And you can repeat/combine these operations because who's gonna stop you?
There is nothing left to prove.
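If you want the bookkeeping anyway, the hand-waving above checks out mechanically. A sketch in plain Python (the helper names `m` and `matmul` are mine, not from the thread):

```python
def m(x):
    """The 2x2 matrix [x x; x x], as nested lists."""
    return [[x, x], [x, x]]

def matmul(a, b):
    """Ordinary 2x2 matrix multiplication."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Closure: m(x) @ m(y) = m(2*x*y), so the product stays in the set
# whenever x and y are nonzero.
assert matmul(m(3), m(5)) == m(2 * 3 * 5)

# The identity element is m(1/2), not the identity matrix: on the
# line y = x it multiplies by 2 * (1/2) = 1.
assert matmul(m(7), m(0.5)) == m(7)

# And the inverse of m(x) is m(1/(4*x)), since 2 * x * (1/(4*x)) = 1/2.
assert matmul(m(8), m(1 / (4 * 8))) == m(0.5)
```

Closure, identity, inverse: the whole "proof" is the single identity m(x)m(y) = m(2xy), which is the "multiply by 2x on the line" picture in matrix clothing.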
This "hand-wavy" argument, of course, is something that gives much more understanding than a "formal" proof from the "definition". That proof is "fun" because it is surprising - and it is surprising because it doesn't make sense.
And I would argue that it's much easier to make a mistake there - and conclude that it's not a group because such-and-such axiom doesn't hold.
The hand-wavy argument, though, ultimately comes from (or would lead to) an understanding that matrices act on their eigenspaces - and little more needs to be said (given that all of those matrices share an eigenspace).
Furthermore, this gives an example of a group representation for the group of nonzero real numbers with the operation x ∘ y = 2xy.
Of course, such a definition is utterly confusing; why would anyone come up with such a thing, other than to torture people? Why *would* one want to redefine the product of real numbers to be something else?!
Seeing people work it out on Math StackExchange [3] is painful.
The people giving the answers are confident that a * b := 2(a+b) both is and isn't a group! This alone should tell you that at some point, rigor becomes a hindrance. This is that point.
I say, the correct answer is that if someone gives you a group without a thing that it acts on, ask for your money back.
Of course that thing isn't a group, but what breaks if we just say it is? Since nobody is giving me a refund on this, let's see how it would act on itself. An element a would act by sending b to (2a) + 2b, so we have translation and scaling. Can we undo this? Sure, we just need to shift back by (-2a) and scale down by 1/2. But scaling down isn't an option here, so tough luck.
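The failure is easy to exhibit directly; a minimal sketch (the function name `op` is mine):

```python
def op(a, b):
    """The proposed operation a * b := 2(a + b)."""
    return 2 * (a + b)

# Associativity already fails:
# (1 * 2) * 3 = 2*(6 + 3) = 18, but 1 * (2 * 3) = 2*(1 + 10) = 22.
assert op(op(1, 2), 3) == 18
assert op(1, op(2, 3)) == 22

# And there is no single identity: a * e = 2a + 2e = a forces
# e = -a/2, which depends on a.
```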
It's not the only problem, of course; but the student is left utterly confused (again, see comments in [3]!) by this exercise, whose point seems to be that "arbitrarily messing with definitions sometimes works and sometimes doesn't".
But it *feels* like this should be a group. Let's fix it. The rule "a * x -> 2(a+x)" is kosher; we can take it to be the action of a on the real line.
What does the composition look like?
Well, a * (b * x) = a * (2b + 2x) = 2a + 4b + 4x
That "4x" there tells us that the group generated by these actions is larger than just the generating set. Nothing in our generating set can multiply by 4 (again, that would be a way to see that the rule doesn't define a group). The exploration can then go on further to examining which subgroup of the affine transformations of the real line this generates. It's interesting!
I trust that having rigor forced on you could have improved your mathematical reasoning. But in that case, you are exceptional - or there was more to it than "do it this way just because". The most common case, in my experience, is represented in [1][2][3] (particularly in the comments): it makes people confused, wrong, and lost.
I'd rather have them never see a definition of a group than go through that kind of brain damage.
>I agree that you cannot teach math in a purely bourbakist style. I prefer a "visual" style like that inspired by the books of Arnold, Strang, Needham, and I am the sole teacher in my lab that seriously uses the word "amplitwist" to refer to the complex derivative :)
In that case, they might be crying tears of joy or grief over all the years they were taught otherwise :)
[1] https://www.youtube.com/watch?v=q_JqHQPbmUk
[2] https://math.stackexchange.com/questions/919040/proving-a-gr...
[3] https://math.stackexchange.com/questions/1108349/prove-that-...
> Until mid-20th century, mathematics had never been communicated in this austere manner.
Euclid.
The Elements may have been meant to go through a teacher who'd help you connect to it, who knows? But the text itself is totally definition, theorem, proof, repeat, and that's all that survived to reach us.
Great Write-up!
I linked to the Arnold essay too before i saw your post :-)
Thanks!
I ended up writing a mini-essay as a follow-up comment.
Also, would love to chat more!
> If you read any science papers, they start with a clear introduction with the aims, claims, importance and novelty of the work
I have read plenty of science papers and this is extremely far from universally true.
> importance and novelty of the work
That's there for playing the grant funding game. It's a waste for people who care about the content of the research.
The prestigious open Discrete Analysis journal provides accessible editorial introductions:
You are assuming that the paper is important or novel. Maybe they just created some dry theorems and have nothing more to say? The time I spent at a math institution tells me that many of them don't know or care about why or how their work matters.
Research Math is an inside joke between friends. The jokes fall flat unless you know the same people and attend the same parties.
An aside: his UC Riverside page is full of interesting stuff: https://math.ucr.edu/home//baez/README.html
Dr. Baez is a brilliant mathematical physicist, but web design and publishing is not his strong suit.
I think the page looks and is great. One column of plain text and pictures. Minimal, loads quickly, easy to understand and navigate.
It doesn't have any of the pointless bloat of most "modern" web design, and is all the better for it.
Meanwhile, on the front page of science.org: "NF-κB activation in cardiac fibroblasts results in the recruitment of inflammatory Ly6Chi monocytes in pressure-overloaded hearts"
Sometimes papers are technical and don't need to pretend that they are telling an exciting story of interest to a general audience. It isn't just a math issue.
https://www.science.org/doi/10.1126/scisignal.abe4932
This does a considerably better job of context/interest than the math example did.
It's a lot easier when all you have to do is say stuff like "heart failure is bad".
What's the mathematician supposed to do, say "group theory is cool and important"?
The thing is, mathematicians understand how cool and important it is, and that's enough. You can't really explain it to someone else -- it's like trying to explain how cool and important a piece of music is to a deaf person who doesn't know music. They see the conductor waving and say "well, that's boring." All you can do is explain "there is a whole world of beauty and meaning there. I'm sorry you can't experience it, but it's there."
The "Mathematics is Boring" author is a mathematician who seems really enthusiastic about math. He's not asking here for mathematicians to punch up their papers for nonmathematicians; he's asking them to give a bit better context for all the other mathematicians beyond the dozen others in the same sub-sub-subspecialty.
This Science paper's intro/abstract sets it out for scientists, rather than for biologists in whatever subspecialty this thing is.
Reading math can be boring (often it's not), but solving problems never is. (Math is not a spectator sport.)
I also hear people say programming is boring. This is absurd.
Maybe off topic, but solving problems is a fraction of the joy I get from programming.
Expression and personal power over reality are where I get the joy.
A painter can create a world and share a feeling. An author can manifest a memory. A musician can transmit a human experience without language.
Human imaginings about magic are immemorial.
Math describes reality. Math also can describe an extrapolation further.
Programming can manifest from the descriptive language of math into the real. It can use it as a pigment for a new kind of picture. Programming helps with everything below, and it enables new things slightly above.
By "above" I mean the layers of abstraction. A programmer isn't an painter, but a programmer/painter has an additional axis of art.
Programming is our species' apex of material transcendence. I don't believe it's the top of the pile, but I have no conception of what's above. Its capacity for encapsulation seems to grow as the dreams do.
Programming grows to predict, programming grows to create. Everything is just a new library, a new framework, a new environment. How long before its limits are found, so we can find the next epiphanies?
Math does not describe reality, Physics does. Math is just pure thought. If anything, Math is an abstraction of how we think about things, but not the things themselves.
This is incorrect. Mathematics also describes reality, just a different aspect of it. It has never been “pure thought.” Hence its usefulness in science and engineering.
I believe this is actually a really great point. For instance, I didn't know that it was originally Brahmagupta who started using symbols in math. His initial use of symbols to represent numbers involved using color names like "blue" and "green".
More interestingly before using symbols as variables in math Egyptians were capable of doing Quadratic equations without these variables.
If I were teaching math today I would probably teach this. I would try to do algebra without using variables, and then I would use funny words like "blue" as Brahmagupta used to. I think that this would probably stop a lot of the questions like "why are there letters in math!?"
> I think that this would probably stop a lot of the questions like "why are there letters in math!?"
Wouldn't people just instead ask "Why are there colours in maths?"?
I don't think that this:

  blue + blue = 2*blue

Is any more meaningful or edifying than:

  x + x = 2*x

Yeah, I think the problem is not about variable names but about giving meaning to these abstract objects. Beginners often have difficulty understanding abstract concepts, and specific (concrete) examples help them learn. I think "funny words" alone will not be helpful, and may even be detrimental to beginners' understanding, as they may be distracted by unrelated concepts (e.g. what does it mean to add blues together in this example?).
Not if you show a really long example with algebra where you are not using any variables, and show how it can be simplified with variables.
I don't recall where I read this, but a young Feynman motivated himself to learn trig by imagining that he had been challenged by a mysterious stranger to answer riddles. For example (I'm making this up) "You only have a protractor and the ability to measure your paces - tell me the height of that flagpole!" The operation, then, is to pace out a distance from the base of the flagpole, sight down the protractor to the top of the flagpole to get an angle, and compute. (Since tan(a) = y/x, y = x * tan(a).) So the motivation was imaginary and concrete. And it's dramatic, because there's an obstacle, a chance of failure, and a chance for glory.
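The whole riddle boils down to one line; a sketch with made-up numbers (20 paces from the base, a 35-degree sighting):

```python
import math

paces = 20.0       # distance paced out from the flagpole's base
angle_deg = 35.0   # angle read off the protractor to the top

# tan(angle) = height / distance, so height = distance * tan(angle).
height = paces * math.tan(math.radians(angle_deg))
print(f"flagpole height: about {height:.1f} paces")   # about 14.0 paces
```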
I can't help but see a parallel with magicians, who can dazzle us because they are willing to go further than most of us, in terms of practice. In the same way, math gives you the ability to dazzle with surprising answers, to do a lot with a little.
I think they’re doing this more in schools. I was helping a first grader with homework recently and I was surprised the math worksheet was essentially simple algebra but with a shape or a little picture to represent the variable instead of a letter:
> 3 + circle = 7
> 10 - cloud = 4
It seems like a perfectly reasonable exercise for a first grader. Clouds and puppy faces and circles. But I have to admit if I’d seen “x” in place of the symbols I’m not sure I would’ve thought it was so reasonable.
The introductory arithmetic I've seen recently even kind of flips it around and defines subtraction in terms of algebra. "7 - 5 = ?" is presented as "5 plus what equals 7?"
That's how subtraction is actually defined, so that's not a bad idea.
Mathematics is NOT Boring; the teaching of Maths divorced of Real-World Applications is what is Boring. An over-emphasis on Formalism/Abstraction is what is killing people's interest in Maths/Sciences.
The Teaching of all Maths/Sciences should always start with a Real-World motivating example and then introduce the Maths as necessary to Solve it.
In this context see V. I. Arnold's essay; On Teaching Mathematics - https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html
Quote from the above article:
* Attempts to create "pure" deductive-axiomatic mathematics have led to the rejection of the scheme used in physics (observation - model - investigation of the model - conclusions - testing by observations) and its substitution by the scheme: definition - theorem - proof. It is impossible to understand an unmotivated definition but this does not stop the criminal algebraists-axiomatisators.
* What is a group? Algebraists teach that this is supposedly a set with two operations that satisfy a load of easily-forgettable axioms. This definition provokes a natural protest: why would any sensible person need such pairs of operations? "Oh, curse this maths" - concludes the student (who, possibly, becomes the Minister for Science in the future).
* We get a totally different situation if we start off not with the group but with the concept of a transformation (a one-to-one mapping of a set onto itself) as it was historically. A collection of transformations of a set is called a group if along with any two transformations it contains the result of their consecutive application and an inverse transformation along with every transformation.
The author says that mathematicians find math outside their own field to be boring and difficult to understand. As a mathematician, I think he's rather missing the point:
- mathematics is boring to everyone right up until the moment you need it. Then suddenly it becomes very interesting.
The way mathematicians typically read papers is not by randomly picking through recent submissions to the arxiv and dutifully reading everything they come across. Instead, they stumble on a hard problem in their own research which they don't know how to solve, and they search to see if anyone else has worked on it before. The paper you would have discarded as pointlessly abstract or ridiculously overspecialized just yesterday suddenly reads like a riveting novel today. No amount of creative writing tips would have made it any more interesting to you yesterday - unless the writers happened to anticipate the exact reason you would end up becoming interested in it ahead of time.
> mathematics is boring to everyone right up until the moment you need it. Then suddenly it becomes very interesting.
That might be true for higher level math, but anything at the graduate or undergraduate level has been already curated to be interesting.
I find that the older I get the more I appreciate math. I did find it boring when I was younger. I'm not sure if that's thanks to folks like Mathologer and 3Blue1Brown. I tend to think visually and they do a wonderful job in that area. I don't recall anyone presenting math like they do when I was in school in the 1980's.
To me it's the language.
You have to learn a whole new alphabet and signs.
This is done for the sake of quick communication between mathematicians, but it would be worth studying the pros and cons.
While it's true that it makes communication faster and straightforward it keeps so many people outside of the field.
Maybe the field would benefit to go more towards philosophy and logic, explaining it with words.
The idea that unfamiliar symbols and alphabets are a huge problem for the accessibility of math is common. As physicist I do not agree. Math is hard. It's damned difficult. Symbols and alphabets are the least of your concerns when dealing with a math paper. I know a lot of these symbols by name, I sometimes understand the notation or could familiarize myself with it but the math itself? Nope, no chance, usually. If one cannot deal with the symbols, there is no chance in hell one could deal with the ideas.
I'll disagree. I read many papers with mathematics in them, and I get a lot of the concepts, but the symbology used doesn't make sense to me, so it's hard for me to understand what exactly is going on. The sentence after the equation that explains each symbol is necessary for me, and many others as well. Not everyone has taken 8 math classes and knows every Kronecker delta by heart.
> I read many papers with mathematics in them
That is the problem: non-mathematicians usually don't fully understand the math they use in their papers, and thus the math becomes opaque. Papers written by real mathematicians are usually much easier to read, although of course the math in them is much, much harder.
Well, musical notation looks like gibberish to someone who did not learn it. That said, I do agree with you 100% on scientific papers. Without an explanation of the formulas to cater to a wider audience a lot of papers fall into the "and then a miracle occurs" fallacy. Not because that's what they actually do. Not at all. I say this because to a large set of readers the impenetrable math has to be taken as a divine act that moves you from step n to n+1.
I remember going to lunch with one of my math professors in college. He was working on his PhD and was about to publish his thesis. As we sat down to eat he was very excited as he pulled out a sheet of paper from his pocket. It had been folded 3 or 4 times. You could tell he had been carrying this thing around, folding and unfolding it, for a long time because the folds showed wear.
This piece of paper was full of formulas, both sides, there was not a single blank area on the entire sheet.
He unfolded it and proceeded to give me a quick talk about what he was working on. He was very excited about it and I was happy for him. And yet that entire piece of paper looked like a language from another galaxy to me. I was on my third Calculus course. I had no clue what he was talking about.
Digressing a bit:
To this day I remember this when helping my kids with math, science and coding. As a matter of fact, I am currently working on an explanation of exponentiation and logarithms. In both cases everything looks great if things are whole-number powers of the base. The minute you do something like 2^2.1 or log_3(35.53) you hit the "and then a miracle occurs" problem, where you have to explain a thing by using the thing ("A white horse is a horse that is white").
I've spent the last couple of days working on cleaning-up an explanation of these things that makes sense without using a miracle to get to the answer. One of the problems is that there are natural explanations for things like square and cube (area and volume), but, what do powers of 2.1 and 3.25 mean? It is interesting how things completely break down. I don't think I have found a single mathematics text that bridges this gap.
If anyone has a sensible explanation of this I'd love to hear it!
Following on from my other reply...
When we start teaching math to students, we start with counting blocks: "You have 2 piles of blocks, one pile of 3 and another pile of 2. If you put them together, you get a pile of 5 blocks!"
That stops working as well when you deal with fractions. You can get away with 2.5 blocks, but 2.5 blocks is really 3 blocks, but one is a little smaller than the others. And at some point you can't use blocks to represent 2.3456 blocks. So you need different kinds of "natural" problems to represent those numbers.
But, as you point out. There are some things that aren't really representable as "natural problems". For a long time the idea of 0 wasn't natural. (People were actually killed for talking about the idea of 0) I mean, what does it mean to have 0 chickens? You either have some chickens, and you say "I have N chickens", or you don't have any chickens and you say nothing. Why would you need a number to represent nothing?
Maybe n^2.1 doesn't have a natural explanation. At least, not one you can hold in your hand. Can you imagine a shape with 2.1 dimensions to relate it to geometry? Probably not. But you can use geometry to prove that n^(a+b) = n^a * n^b and then you can apply those rules to "unnatural values" with an understanding of what is happening. The natural explanation of n^2 can be applied to the unnatural idea of n^2.1
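For instance, the rule pins down what 2^2.1 has to mean with no new machinery; a quick check in Python:

```python
# 2^2.1 = 2^2 * 2^0.1 by the addition rule, and 2^0.1 is forced to be
# the 10th root of 2, because (2^0.1)^10 must equal 2^1 = 2.
tenth_root_of_2 = 2 ** 0.1
assert abs(tenth_root_of_2 ** 10 - 2) < 1e-9
assert abs(2 ** 2 * tenth_root_of_2 - 2 ** 2.1) < 1e-9
```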
Not everything in math can be understood with geometry or "natural examples"; lots of math (most of math?) describes things that are not representable within the constraints of our physical world. That's what makes it so powerful!
Also, not everything in math can just be calculated (see: irrational numbers)
> For a long time the idea of 0 wasn't natural.
Yes! A long time ago I read a wonderful little book on just this bit of history:
https://www.amazon.com/Zero-Biography-Dangerous-Charles-Seif...
You can start explaining fractional powers with roots, e.g. x^0.5
Right, the problem is that you quickly run into the "miracle occurs" territory.
The square root of a number takes us from an area to the length of the side of the square corresponding to that area. The cube root is the same for a cube. What is the 10th root of x?
It's a number that, when multiplied by itself ten times equals x.
OK. How do you compute this number?
The best I can offer at this point is, for simplicity, a brute force search or, for faster results, a bisection search algorithm.
In other words, the "and then a miracle occurs" moment is right there. The fact that I can key these numbers into a calculator and get the answer isn't the kind of explanation I want to use for my kid. I don't want to say "once you get here you pick-up your calculator", because the legitimate question then might be "If it's magic, why don't I just pick it up at the start of the problem?"
To be clear, I don't mean "miracle" as anything other than "this shit is hard-to-impossible to explain or calculate by hand". That said, you could probably run through a quick bisection search by hand and likely converge on a low error answer in 2 to 5 cycles.
The meaning of the e-th root of b is explained with exponentiation, and exponentiation is explained with the root.
I think the magic/miracle of math is that you can go from "real world" into "math world" then back into "real world". If a rule is true for c and n and n+1, and you can physically represent the idea when n=2 and n=3, then you can apply that representation theoretically to n>3 to understand ideas that are not easily understandable.
The 10th root of x takes you from a measurement of a 10-dimensional object to the measurement of a 9-dimensional object. That's crazy, right? Without needing to "understand" what a 10-dimensional object is, you know something about it because you understand what roots mean with lower values...
Of course, that doesn't help you actually calculate the 10th root of x. Is there a better way than basically guess, check, and refine? The calculator is just really fast at doing that (and only needs to calculate a relatively small number of significant digits). Sometimes that's just how math is. The only magic there is that computers are very fast at computation compared to people.
>The 10th root of x takes you from a measurement of a 10-dimensional object to the measurement of a 9-dimensional object.
Doesn't it take you from 10d to 1d? For instance, 10^10 is the hypervolume of a 10-cube with all side lengths = 10.
Imagine you are trying to explain this to a 15 year old.
If math is going to make sense to kids we can't resort to explanations that sound like "and then a miracle occurs".
BTW, I am not being critical of your answer. What I am saying is that there are these corners in seemingly simple math that have me scratching my head when it comes to explaining the concepts to a kid in a manner that makes sense and isn't circular. I have yet to find good answers to these questions.
Kid: What does the 10th root of n mean?
Dad: It's the number, let's call it x, that, when raised to the 10th power, is equal to n
Kid: So: n = x * x * x * x * x * x * x * x * x * x?
Dad: Yes! You got it!
Kid: How do you calculate it?
Dad: Well...
Kid: What if it is the 10.1 root of n?
Dad: Well, that's a little different...
Kid: How?
Dad: It's the number that, when raised to the integer part of the power, times that same number raised to the fractional portion of the power, is equal to n
Kid: What's the fractional portion?
Dad: For the case of p = 10.1, it's 0.1
Kid: x * x * x * x * x * x * x * x * x * x * x^(p - int(p)) then?
Dad: Yeah.
Kid: How do I calculate x to the 0.1 power?
Dad: Well, you could use your calculator...(now starting to sweat)
Kid: How does the calculator do the math. You know, like when the math teacher says "Show your work"
Dad: Well, you could use logarithms...
Kid: What are logarithms?
Dad: A better method could be to use Newton's method. Here:
https://en.wikipedia.org/wiki/Newton%27s_method
Kid: It says: "start with an initial guess which is reasonably close to the true root, then to approximate the function by its tangent line using calculus, and finally to compute the x-intercept of this tangent line by elementary algebra"
Dad: Yes...
Kid: I don't know calculus. Is that the only way? I just wanted to understand how to calculate the 10th root of a number?
Dad: OK, let's try this. I just threw it together:
  # Calculate the exp root of n using a binary search
  def root_binary_search(n, exp):
      # Return b, which is the exp root of n
      # b**exp should be equal to n
      min = 0
      # For exponents < 1 the max needs to be sufficiently large
      max = n
      if exp < 1:
          while max**exp < n:
              max *= 2
      max_error = 0.00001
      while True:
          b = (max + min) / 2
          b_exp = b**exp
          error = abs(n - b_exp)
          # print(f"min: {min:15.4f} max: {max:15.4f} b: {b:15.4f} b_exp: {b_exp:15.4f} n: {n:15.4f} error: {error:5.8f}")
          if error <= max_error:
              return b
          else:
              if b_exp > n:
                  max = b
              else:
                  min = b

  # Tests
  print(root_binary_search(4, 2), f" result should be: {4**(1/2)}")
  print(root_binary_search(16, 2), f" result should be: {16**(1/2)}")
  print(root_binary_search(5, 0.1), f" result should be: {5**(1/0.1)}")
  print(root_binary_search(2, 10), f" result should be: {2**(1/10)}")
  print(root_binary_search(4, 0.25), f" result should be: {4**(1/0.25)}")

Kid: So...you are telling me to guess?
Dad: Yeah...? (looking embarrassed)
Kid: And you accept an error? 4 to the 4th power is 256, not 255.998046875?
Dad: Well, you have to understand that with a binary search...
Kid: And, did you see what happens if I run this case?
  print(root_binary_search(4, 1), f" result should be: {4**1}")

Kid: Dad?
Dad: I have to get back to work. Why don't you ask your math teacher tomorrow?
I think you missed (or at least aren't building off of) the point of my comment.
I'm not questioning the pedagogy in the original comment, just the specific math. x^(1/10) takes a value of dimension [length^10] to a value of dimension [length].
Interestingly, I think you could take this in a few aesthetic directions. From a pure math perspective, this is where you can start talking about set theory, cardinality, etc. Irrational numbers are infinite sequences of digits we can only approximate. From a computer science perspective, you can talk about Newton's method, and also make the argument that an algorithm which converges to a number is a quite meaningful way to describe that number. Some would also add a caveat of 'efficiently' converging. And combining the two perspectives together, you can discuss that the set of computable numbers is of a lower cardinality than the set of reals -- aka 0% of real numbers are computable. You could also look at things from a geometrical perspective, and show how roots higher than square roots are tied to higher dimensions and are nonconstructible in the plane (this might be very hard to show!).
I understand what you are saying, believe me. I am trying to keep it simple because the objective is for the child to walk away with a useful non-scary answer that gives them a sense of proportion with which they can approach thinking about these things.
Anyone who has tried to teach a child math is familiar with just how hard it can be to have them understand seemingly simple concepts. Simple example unrelated to powers/logs/roots: it took me about half an hour to explain how you can shift a parabola right and left by simply adding or subtracting a constant from x in the simplest form y = x^2. The fact that it moves in a direction opposite the sign caused even more confusion. It took telling the story in five different ways before the "aha!" moment happened.
The relationship between exponentiation and logarithms is another one that gets fun once things are not nice and even. Exponentiation is sequential multiplication and logs sequential division. Sounds good, until you can't multiply or divide by the base any more.
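The "sequential division" picture can at least be pushed as far as it goes; a sketch (the leftover fraction is exactly where it stops being elementary):

```python
# log_3(35.53) by repeated division: divide by 3 until the result
# drops below the base, counting the divisions.
n, base, count = 35.53, 3, 0
while n >= base:
    n /= base
    count += 1

# Three whole divisions, with a remainder of about 1.316:
# log_3(35.53) = 3 + log_3(1.316...). The integer part is easy;
# the fractional part is where the simple picture breaks down.
assert count == 3
assert 1 < n < base
```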
I find it interesting that in all of my searching I have not found a simple approach to explaining these things to children so they can build a tangible sense of what's in front of them.
That said, if the kid understands coding, yes, you can use programs to have them explore how things might work, create solutions, understand errors, estimation, etc. More the reasons to perhaps teach coding and math in parallel and to the same level of importance in schools.
>I understand what you are saying, believe me.
Do you? Almost nothing you've said has any relevance to my original comment.
Yes, I do. You can't approach kids with the kind of explanation you are proposing without their eyes glazing over. My kids are very comfortable with STEM and you still have to be careful. You cannot assume this to be the case with the general school population.
No, you still don't understand, because I'm not proposing an explanation!
I even clarified: "I'm not questioning the pedagogy in the original comment, just the specific math."
And yet, if you go back and look at this thread, what I am looking for is, in fact, an explanation that will work well for children. I thought I was explicit enough. I apologize if I was not.
Quote:
"I've spent the last couple of days working on cleaning-up an explanation of these things that makes sense without using a miracle to get to the answer. One of the problems is that there are natural explanations for things like square and cube (area and volume), but, what do powers of 2.1 and 3.25 mean? It is interesting how things completely break down. I don't think I have found a single mathematics text that bridges this gap.
If anyone has a sensible explanation of this I'd love to hear it!"
In other words, I have no use for anything else as it quickly becomes an irrelevant time sink given the stated goal: Trying to explain this to children.
If you can translate what you wrote into something that can be taught to an average teenager (meaning, not a mathematically gifted or advanced student), you might just have the answer.
So far the only explanation I have found for how to solve these kinds of problems is successive approximation by guessing the answer. One level up from there is to use various algorithms to do the guessing, either on paper or through a computational solution (which requires a reasonable level of comfort writing code or using something like Excel).
I do want to thank you for taking the time to contribute to the conversation. Be well.
Except that mathematicians like to use shorthand notation everywhere, shorthand that only they understand... For example, P(A|B,C) ?= P(A|B;C).
Moreover, mathematician seems driven by a frugal principle. They try to condense their though in the smallest number of symbols. To me it's like writing a Perl program with the shortest amount of text. Of course, the result is right, but it's super hard to understand.
Yeah, it's like there is something in their brain which differs from non-mathematicians.
Non-mathematicians discover something and then want the largest number of people to understand that thing as well; they want to be the ones explaining it and to see the sparkle in the eyes of those receiving the information and "getting it" for the first time.
Mathematicians want their peers to understand, first and foremost, in the quickest way possible. They then rely on 3rd parties to explain it to people they consider "normies"; they slap their name on the theorem or the demonstration and further use what they consider "subordinates" to explain it to the few "normies" who want to make an effort to understand it.
Most mathematics papers will define the symbols they use beyond the basics (and sometimes even the basics). If you are thinking about extremely common symbols then... it's like complaining that somebody not trained in music cannot read sheet music.
One can make the same argument for doing arithmetic using Roman numerals.
Math will only emerge as a field when people stop treating it as something that is only done at the frontier.
In other competitive fields, such as banking, basketball, and football, the higher-ups care about the pyramid below them, if only as a place to recruit new talent.
Among the math higher ups, only Jim Simons cares about the "math pyramid" so to speak.
One has to be pragmatic: the goal of getting the population interested in math is GDP and median quality of life.
I know those things are very mundane for mathematicians who are absorbed in their world trying to be the ones to crack the Riemann hypothesis, but even as that individual you have slightly better odds of making it if your surroundings look like Zurich or Cambridge vs. Baltimore or Mobile.
Matter of fact you have better odds if your country can extend the areas looking like Zurich and Cambridge and reduce the areas looking like Baltimore or Mobile.
What are you talking about? Mathematics has well and truly "emerged as a field" lmao
I think you'd be surprised if you gathered data on the amount of cursing that goes on in colleges when students have to face math vs. English or history.
Also, cortisol levels spike before a math exam compared with, again, English or history.
Math is notoriously the most hated subject/field. There is no point trying to hide it.
Unless one is a masochist, it's desirable to be popular, and although the field cannot and should not be watered down for the sake of ease of access, I think there is a lot of room before reaching that point. But nobody is interested in making that effort.
What's this got to do with "emerging as a field"? Yes Maths is hard, yes generally people don't like to do it, but that doesn't mean it hasn't "emerged as a field".
It's self imposed because mathematicians love to use their own notation.
Math is at its core philosophy and logic. Those are both hard too, but they don't get the same level of hatred that math gets, because unlike mathematicians, philosophers and rational thinkers take the time to explain what goes through their minds instead of condensing very complex thoughts into 20-character strings.
Turns out there are some social rules that not even math can break. If your attitude is:
"I don't give a damn about people understanding what I am trying to say, they are all dumb and uninteresting because they don't even put the time in to learn my special notation"
You won't be very popular. And your field will be kinda hated, which is what is happening.
> [philosophy and logic] don't get the same level of hatred that math gets
Because children aren't forced to study philosophy and logic for many hours each week.
And if you think logicians (even of the non-mathematical variety) aren't constantly inventing new notations...
> "I don't give a damn about people understanding what I am trying to say, they are all dumb and uninteresting because they don't even put the time in to learn my special notation"
Those mathematicians aren't calling you "dumb and uninteresting", you're just not in their target audience.
> And your field will be kinda hated, which is what is happening.
Again, it's hated because children are forced to study it, because their parents think it's essential. Most people don't think mathematics is run by an evil cabal of men intentionally obfuscating their work with crazy symbols just for the heck of it.
What has any of this got to do with "emerging as a field"? Mathematics is alive and well and, indeed, popular. It's not like Mathematics lectures are empty or that huge numbers are shying away from engineering or physics because of the Maths involved. Like, it's doing fine, and clearly "emerged as a field" centuries ago.
I don't give a fuck if people hate Maths, it's doing fine lol. You sound like someone who has literally no connection to the field and therefore can not perceive how it is actually doing, and instead you think "wow Maths must be in a bad state if everyone hates it". But no, that is just not true...
If there is any relevant criticism of Mathematics it is much more about going extremely deep into vastly theoretical domains without any reference to how practical such solutions might be than anything to do with whether or not the lay person "enjoys Maths".
All of this is irrelevant bullshitting. Maths emerged as a field centuries or even millennia ago. Please read the wiki page on "Mathematics".
Well, one of the GOAT mathematicians of our time, Jim Simons, thinks math in America is in bad shape.
And the same goes for Terence Tao: he was unsatisfied with the low degree of collaboration between mathematicians and stepped up, saying (politely), "this sh-t has to stop."
Of course, the rampant ego in the field prevents people from rallying around leaders like that, or even arriving independently at the same conclusion, because all these people see is the di-k measuring contest or slapping their name on a theorem.
You speak about the practicality of math; what's more practical than turning Mobile, AL into a Cambridge or a Zurich?
If people in math cared about explaining their thoughts the way philosophers and rational thinkers do, then it could be possible.
Tbh, those people are the ones with huge egos. Mathematics is getting done on the raw ground regardless of if they are present or not. I hate this whining bullshit, if you want people to collaborate, make the tools to help people do so, rather than cringy pointless posturing.
> turning Mobile, AL into a Cambridge or a Zurich.
You are extremely naive if you think that this is something that would happen by raising general mathematics literacy. Nor is it something that we should necessarily even want to happen. Stop living in a dream land, not everyone needs to or should be a mathematician. We should aim for general literacy in statistics at most.
> If people in math cared about explaining their thoughts the way philosophers and rational thinkers do, then it could be possible.
Well, mathematics is in a much better state academically than philosophy, and literally in the UK we have ZERO philosophy education in the entirety of school. Philosophy hardly seems like an ideal to strive for: the average person knows even less philosophy than mathematics, and academically it is far smaller and less well funded.
> Maybe the field would benefit to go more towards philosophy and logic, explaining it with words.
Interesting perspective.
I studied Philosophy and Logic in university:
https://en.wikipedia.org/wiki/List_of_logic_symbols
Much of it was familiar to me because, earlier, I took a class my Physics professor insisted we should take, as he put it, "if you want to get out of the dark ages": Programming in APL.
As it turns out, many of the symbols used in APL come from logic.
To this day I find it disturbing that Python uses "^" for bitwise XOR, because in both logic and APL that is the symbol for AND. Anyone who has studied logic instantly recognizes the APL logic operators.
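For anyone following along, Python's choice is easy to check at a prompt; a tiny sketch (the operands are arbitrary):

```python
a, b = 0b1100, 0b1010

print(bin(a & b))  # 0b1000: AND keeps bits set in both operands
print(bin(a | b))  # 0b1110: OR keeps bits set in either operand
print(bin(a ^ b))  # 0b110:  XOR keeps bits set in exactly one operand
```

So the caret, which looks like the logical wedge "∧" (AND), is XOR in Python, while "&" plays the AND role. Confusing if you arrive from logic or APL.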
I say "interesting perspective" because the reality of what you are asking is precisely opposite what you think the outcome would be.
> https://en.wikipedia.org/wiki/List_of_logic_symbols
Among all those symbols, only ">" and "<" are somewhat intuitive; for all the others you have to learn what they mean.
Even "=" is derivative of "<" and ">": by reasoning you can get to it by rotating the two lines about 30 degrees, after realizing that you are dealing with two numbers which are in fact the same, not one being bigger than the other.
Yes, of course. BTW, I don't think my comment covered the fact that I agree with you 100% on the impenetrability of mathematical notation.
That said, all notations --including the written alphabets of many spoken languages-- are impenetrable until you learn them. As a personal example, for me, learning French and German was a million times easier than learning Chinese and Japanese. In the first two cases I could read and write the languages right away. In the case of the latter two the notation imposed both a significant time drain and a cognitive load that got in the way of learning. I did a lot better with Japanese than Chinese. And BTW, I would not dare say I know these two languages. I can rattle off a bunch of phrases in Japanese and understand them if spoken slowly. My brain has yet to synchronize to Chinese.
My point is that specialized notations have been a part of the human experience forever. From cuneiform to modern written languages. Our brains are pretty good at learning notation. I would not fault mathematics for anything other than, perhaps, practitioners assuming everyone reading a math-heavy text understands the notation as they do.
Personal example: One of my kids is going through an MIT CS class on edX. He got scared when he was presented a formula with a huge sigma "Σ" sign in front of it and numbers below and above it.
It took less than a minute to explain that this just means a sequence of sums, maybe ten seconds. I just wrote down something like: "(a0 * b0) + (a1 * b1) + ... + (an * bn)" and said: "This is what it means. Summation". Done.
The point is, notation doesn't have to be hard.
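To make the same point in code: the scary Σ with numbers above and below it is just a loop. A sketch (the names are mine):

```python
def dot(a, b):
    """Sigma notation unrolled: (a0 * b0) + (a1 * b1) + ... + (an * bn)."""
    total = 0
    for ai, bi in zip(a, b):
        total += ai * bi
    return total

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

Anyone who has written a for-loop already understands summation notation; they just haven't been told the two are the same thing.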
> It took less than a minute to explain that this just means a sequence of sums, maybe ten seconds. I just wrote down something like: "(a0 * b0) + (a1 * b1) + ... + (an * bn)" and said: "This is what it means. Summation". Done.
I think the real-world feedback is quite different. Given that math can be explained textually, with words, why should we not do it?
The burden of proof is always on the institution trying to do something. In this case, the US government trying to make the US population better at math.
The population is quite okay with the present-day situation; it's the government's job to make stuff happen and change things around to obtain the desired result, that is, an improvement compared to what we have today.
Math proficiency is in line with proficiency in the new-notation foreign languages from your examples (Chinese and Japanese, and German to a certain extent); that's because, as you said, both math and those languages have a different notation.
Given that (unlike foreign languages) math can be explained WITHOUT having to teach a new notation, then why don't we do it?
New notations are necessary for Chinese, but not for math, so why don't we remove this barrier to entry?
New notation is part of human civilization, but it has to be acquired early on to become like a second skin, which is what Latin letters are for us.
One has to be realistic. Mathematical notation will always take a back seat vis-a-vis literal notation. Kids just don't learn (and aren't taught) mathematical notation the same way they learn (and are taught) Latin letters.
Instead of tilting at windmills, we should take that as a given and try to influence what can be influenced.
As I said, the institution trying to make a change in end results must consider changes in the process... otherwise nothing happens.
I think the notation is very much needed because it quickly becomes a tool for thought and communication. This is very much the case for every spoken language and other areas, such as music. Your point, which is quite correct, is that the math might not be explained well enough and internalized to the extent where the notation becomes a language for students beyond the simplest levels of mathematics.
A kid can learn the notation for whole, 1/4, 1/8, etc. musical notes and their positions on the staff very easily. An immediate relationship is created to the key on the piano or the fret on the guitar. I have been to math classes where the professor simply vomits formulas on the blackboard for an hour and you are left to figure out what the hell happened. That is a problem. Not the notation. The way math is taught.
> A kid can learn the notation for whole, 1/4, 1/8, etc. musical notes and their positions on the staff very easily. An immediate relationship is created to the key on the piano or the fret on the guitar
I don't know about that. How many people can read music?
But also, music, much like Chinese and Japanese, is at a comparative disadvantage relative to math, because there are no other tools to explain how high or low a note is.
You can use words to explain math, just like you did with your son.
Internalization is key: if you miss the window as a kid, it's going to be an uphill battle, and life being as complicated as it is, people end up giving up on it.
And real-world feedback is telling us that this window will be missed. That's why I had thought about math always being explained using the familiar English language, whose window is almost never missed as a kid.
> Given that (unlike foreign languages) math can be explained WITHOUT having to teach a new notation, then why don't we do it?
Can you explain, say, orbital mechanics, without math notation? In a way where someone can determine where a satellite will be at a particular time given its position and velocity at a prior time taking into account disturbances to the ideal orbit caused by the Moon and Sun (we'll stick with just those 2 and pretend the Earth itself is a perfect sphere).
I don't mean explain in a pop-sci sense. That's actually feasible with very little math (though you will probably want some diagrams), I mean explain in a way that the audience can then apply this math-but-not-in-math-notation to solve real world problems.
Again, you are assuming that math is only done at the frontier.
I don't care about the frontier, I care about improving standards of living and quality of life, and that you can do by moving the needle in a concrete manner for HS and college math proficiency.
Not to mention that the satellite operations you mention would benefit a lot from higher standards of living/quality of life, which are synthesized in the GDP metric.
One can only imagine the GDP growth that would happen if math proficiency levels were suddenly on par with coastal China's.
At that point the satellite operations you'd speak of would become much smoother without even needing to move the math frontier forward.
You'd see collapsing costs everywhere ranging from personnel, raw materials, building operations, security and so forth.
I think it would be better to just get away from ultra terseness. It's crazy to me how terse mathematics is compared to CS.
def velocity(time_ms): return ...
vs.
v(t) = ...
Like nearly every operation and variable is one character or symbol long (with the puzzling exception of trig, where you get a whopping 3 characters: sin/cos/tan/etc.).
I really don't want to write a complete word hundreds of times when I am solving an equation on paper.
At least that idea is better than the other way some people want to "improve" math notation: drop it and write everything in "plain English". Like anyone who does even basic algebra would really benefit from that.
"The position of a particle at some point in time is its original position added to the product of its initial velocity and the time added to half the acceleration times the square of the time. Now, if I tell you the initial position, initial velocity, current position, and acceleration, how much time has passed? Remember, you can't use algebra anymore because we banned it in favor of 'plain English', also good luck communicating your ideas to people who don't understand English."
That's actually quite close to how Newton's Principia was written. If you want a real challenge, go try to read it... and if you really want a challenge, try to read it without already understanding mechanics!
Use shorthand with pencil and paper, but make it less terse inside textbooks, inside computers, etc.
I think many people who start finding mathematics interesting at an older age, and blame the math education of their youth, miss that their intellectual capability also strengthens with age. The whole point of something being "interesting" is that it is possible to understand, but not that easily.
It's boring, but usually everything is well-defined and hence understandable.
Not the case for most CS papers.
So many of the difficulties in reading many papers (well, disciplines) boil down to adding links to things the first time you mention them. Good academic writers do this, bad ones don’t. The gist of this paper is that assuming all knowledge prior to your development is very limiting, not only because far fewer people can read it, but because every time you don’t introduce knowledge you also skip over part of a story. Very well, but I think you are correct that CS’ problem is pretty much isolated to the first point, because nearly all papers get to talk about real world applications of the research and that’s the story covered.
A big problem for CS papers, particularly in PL (programming language research), seems to be heavy reliance on assumed knowledge of Greek letters and notation in the very field-specific way you like to use them. People would understand your paper if only they had read your previous three, which alluded to what you might have meant by these Greek letters, but only by figuring out the two citations they each have in common. If you are bad at pronouncing Greek letters and it’s a PDF so you can’t copy them, you can’t even google what you see. Even if you could, it wouldn’t help. Notation is ten times harder to search for.
(I have never, ever had this problem reading a law paper, not even slightly, not even once.)
There’s an interesting demo here from Will Crichton about how to prepare better documents for conveying understanding in PL. He has a thing to show you the “read as” on hover. https://twitter.com/wcrichton/status/1442891297333800966 https://willcrichton.net/nota
It's hard to understand, not because it's boring, but because it's inherently hard and opaque. If mathematicians tried to make complicated topics easier to understand, the papers would be 10x as long.
The title should be "Why mathematics papers are boring, and how to spice them up with narrative"; that is what the article is about.
>how to spice them up with narrative
Oh, Lord; NO!
I would like to see all Human Narratives/Unnecessary frivolities/Assume-reader-is-an-Idiot language banished from the teaching of ALL Maths/Science.
What we need is a focus on the direct teaching of Principles along with their Real World Applications.
I follow the author on Twitter, and really enjoy his expositions of mathematical concepts. I highly recommend him: @johncarlosbaez
What is the opinion of 3Blue1Brown?
Ah, in math writing, it's easy enough to say more and be not boring and at times be at least interesting, inviting, even exciting!
Let's have some examples!!
(1) Dimension.
So, suppose we are in the first class in linear algebra:
"Maybe you have heard that the real line has 1 dimension, is 1 dimensional, the plane is 2 dimensional, and the space we live in is 3 dimensional. Well, that's all true enough, but in linear algebra we do better and have more: For one, we get to say clearly what is meant by dimension, that, in particular, why the line, plane, and space are 1, 2, 3 dimensional. For much more, for any positive integer n we have n-dimensional space.
Next, in linear algebra n-dimensional space is a relatively easy generalization of what we already know well in dimensions 1, 2, 3.
Why might we care? For example, we know well what distance is in dimensions 1, 2, 3, and distance in n dimensions is a straightforward generalization. In dimensions 2 and 3 we understand angle, and that also carries over to n dimensions. For more, with computing it is common to have a list of, say, 15 numbers. Well, for just one benefit, with linear algebra we get to regard that list as a point in n = 15 dimensional space, and doing so lets us do some powerful things with representing and approximating that list."
So, we get some sense of previews of coming attractions and some invitation to higher dimensions.
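And the claim that distance carries over unchanged to n dimensions can be shown concretely; a sketch (the 15-dimensional example is arbitrary):

```python
import math

def distance(p, q):
    """Euclidean distance: identical in form for any dimension n,
    the square root of the sum of squared coordinate differences."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# The same function works unchanged in 2, 3, or 15 dimensions.
print(distance((0, 0), (3, 4)))      # 5.0, the familiar 3-4-5 triangle
print(distance([0] * 15, [1] * 15))  # sqrt(15), in 15-dimensional space
```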
(2) Optimization.
"There is a subject, with a lot of development just after WWII, called linear programming (LP). The programming is in the English sense of operational planning as in war logistics and planning as was crucial in WWII. The linear is the same as in linear algebra.
The main goal, the point, of LP is to find how to exploit the freedom we have in doing the operations -- the work to be done -- to get the work done as fast or as cheaply as possible, that is, to find an optimal way to do the work.
So, the subject LP is part of optimization. There have been some Nobel prizes from applications of LP and other math of optimization to economics. There have been applications of LP to feed mixing, oil refinery operation, management of large projects, and parts of transportation."
(3) The Simplex Algorithm.
"Maybe in high school algebra you saw the topic of systems of linear equations. Well, it is fair to say that the standard way to solve such a system is Gauss elimination due to C. F. Gauss.
The idea is simple: Multiplying one of the equations by some non-zero number and adding the resulting equation to another of the equations does not change the set of solutions. So, doing that in a slightly clever way results in the system of equations with a lot of zeros, about half all zeros, so that the set of solutions is obvious just by inspection.
Then for linear programming, in practice the main solution technique is the simplex algorithm, and it is just Gauss elimination, but done with optimization in mind."
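That "slightly clever way" of creating zeros can be sketched in a few lines. This is a teaching sketch only -- no pivoting, and it assumes a unique solution exists -- not production code:

```python
def gauss_solve(A, b):
    """Naive Gauss elimination with back substitution.
    Teaching sketch: no pivoting, assumes a unique solution."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    # Forward elimination: create zeros below the diagonal,
    # by adding multiples of one row to another.
    for col in range(n):
        for row in range(col + 1, n):
            factor = M[row][col] / M[col][col]
            for k in range(col, n + 1):
                M[row][k] -= factor * M[col][k]
    # Back substitution: the triangular system is now solvable by inspection.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# x + y = 3 and x - y = 1  ->  x = 2, y = 1
print(gauss_solve([[1, 1], [1, -1]], [3, 1]))
```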
(4) Completeness.
A rational number can be written as p/q for integers p and q. We will see, easily, that the rational numbers are not up to carrying the load, are not up to doing the work we need done. So we need a more powerful system of numbers -- we need the real numbers.
Here is a really simple place the rational numbers fail to do what we want: At times we consider square roots. E.g., the square root of 9 is 3. Well, what is the square root of 2? Suppose that square root were a rational number, i.e., so that
(p/q)^2 = 2
Then we have
p^2 = 2q^2
so that the left side has an even number of factors of 2 while the right side has an odd number. Tilt. Bummer! That can't be. That's a contradiction.
So, there is no rational number that is the square root of 2. So, for something really simple, just finding a square root, the rational numbers fail us, can't carry the load or do the work.
The real numbers will let us find the square root of 2 and much more. With the real numbers we get what we call completeness. A joke, basically correct, is that calculus is the elementary consequences of the completeness property of the real numbers. Then we generalize: Banach space is a complete normed linear space. Hilbert space is a complete inner product space. The Fourier transform works because of completeness. So, we move on and see how the real numbers are complete ...."
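The parity argument in the square-root-of-2 proof can even be spot-checked mechanically; a small sketch (the range 1..49 is arbitrary):

```python
def factors_of_two(n):
    """Count how many times 2 divides n."""
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

# p**2 always has an EVEN count of 2s (twice that of p), while
# 2 * q**2 always has an ODD count, so p**2 == 2 * q**2 is impossible
# for positive integers p, q -- i.e., sqrt(2) is not rational.
for p in range(1, 50):
    assert factors_of_two(p ** 2) % 2 == 0
for q in range(1, 50):
    assert factors_of_two(2 * q ** 2) % 2 == 1
print("parity argument holds on this range")
```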
These are fine intros, but then you have to actually dive into these topics. Sometimes there is no really interesting way to explain them. E.g., IIRC, the construction of the real numbers is rather tedious. But I agree that more effort could be put into motivating many of these topics (at least that was my experience studying math).
"Sometimes"? Yup!
"Tedious"? Yup! Can use Dedekind cuts or maybe something called the normal completion, and especially the first is darned tedious!
There is a good math writer G. F. Simmons, of Introduction to Topology and Modern Analysis, who stated that the two pillars of analysis were linearity and continuity -- nice remark. He also stated that really to understand, have to chew on all the arguments, etc. or some such.
Then I decided to study the proofs really carefully, so to "chew", in hopes of finding techniques I could use elsewhere. When I mentioned that study technique and objective to my department Chair, his remark was "There is no time." -- he also had a point.
Commonly there is an intuitive explanation of what is going on and some views that can provide motivation to study the stuff at all.
There are a lot of books and papers. As a student, I saw a lot of the books, got copies of some of them, put them on my TODO reading list, etc. Eventually, after falling far enough behind on the list, I wondered just where all those books were coming from.
It dawned on me: profs need to publish, so they do. They are also supposed to have grad students, and do. The grad students take the advanced course from their major prof and end up with a big pile of notes. Then the grad student, as an assistant prof, wants to publish, so cleans up the pile of notes and contacts the usual publishers to publish a book. The top university libraries are essentially required to buy the books, so they get published and bought. And then, often, there the books sit, gathering dust.
I won't say that writing those books was a total waste, and I won't say that students should spend more time reading them. The books are there on the shelves; they are not really difficult to find. They have work that was done. Maybe the work is useful now; maybe someday it will be; whatever, the work is done, the results found, and there in case they do become useful.
In the meanwhile, back to the mainline of math education, research, applications, usually there can be some helpful intuitive explanations and motivating example applications!
Apparently some authors just give up and assume that their books will mostly just gather dust. But once I wrote Paul Halmos, likely my favorite author, and got back a nice letter from him with "It warms the heart of an author actually to be read, and clearly understood, by ordinary humans." -- at the time I had no academic affiliation and was just reading his book on my own. So, Halmos was surprised that an ordinary human would be reading and understanding his book.
Ah, in what I wrote, I left out that also in linear algebra in n dimensional space, the Pythagorean theorem still holds, that is, an n dimensional version holds!
Mathematics is not boring to those interested in it and pursuing it. It is a universal language.
Adding unnecessary complexity will take away from its "pureness" and terseness. Language is not a barrier to entry.
You will then be graded on incomplete formulas but great storytelling.
Let's leave the storytelling to all the other fields of life.