Inequalities, convergence, and continuity as "special deals"
terrytao.wordpress.com

This was interesting to me, and overlaps a lot with different things that have been on my mind the last few years.
The other day I was talking about something related with my daughter, who was learning about rounding in elementary school. We ended up discussing accuracy in calculations versus the "number of operations" (very loosely speaking, in elementary-school terms), the tradeoff between them, and how in practice you're always rounding at some level, so that tradeoff always exists at some level.
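A tiny sketch of that tradeoff (my own example, not from the discussion above): every finite representation rounds, and each additional operation gives rounding error another chance to creep in.

```python
# Each addition of 0.1 rounds the result to the nearest double, so
# ten additions do not land exactly on 1.0.
total = 0.0
for _ in range(10):
    total += 0.1  # this addition rounds

print(total)         # prints 0.9999999999999999
print(total == 1.0)  # prints False
```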
I also do research in information theory, and somehow the topic Tao discusses seems related. In that area, there's always some potential or actual loss of information due to information and computational constraints, things are always being discretized, and some representation always has some information cost. What Tao is talking about is an information cost, but cast in terms of numerical accuracy rather than stochastic terms.
This is all very vague in my head but it seems like there is some path from stochastic information costs of representation to deterministic information costs of representation, along the lines of approximations and limits. People use probabilistic arguments in proofs, for instance, and there's pseudorandom numbers; I imagine you could treat both what Tao is talking about and more traditional information theory problems in the same framework.
The inequalities section didn't make sense to me. It's harder to understand than the epsilon-delta framing from the opening, and it doesn't feel like it translates back into a better understanding.
Convergence is presented as just a pattern. It doesn't have to be economic, but the example naturally suggests convergence, so that's ok.
But continuity and differentiability didn't make sense either. You don't "buy" continuity. There's no (increasing) value attached to smaller intervals, at least not in my understanding of it.
The metaphor in the post says that, if you have a continuous function and you want to restrict its value to within a very small range, you "pay" for that by restricting the value of the independent variable to a suitably small range. That such a "payment" is possible [phrased another way, that this payment will have the effect you want] is what it means for the function to be continuous.
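To make the "payment" reading concrete, here is a hedged sketch (my own example, not from the post) for f(x) = x² at x₀ = 2: the "price" of keeping f(x) within eps of f(x₀) is restricting x to within some delta of x₀. Since |x² − 4| = |x − 2|·|x + 2| ≤ 5|x − 2| whenever |x − 2| ≤ 1, taking delta = eps/5 works for any eps ≤ 1.

```python
def f(x):
    return x * x

def delta_for(eps):
    # valid for eps <= 1: if |x - 2| <= 1 then |x + 2| <= 5,
    # so |x^2 - 4| <= 5 * |x - 2| < eps when |x - 2| < eps / 5
    return eps / 5.0

eps = 0.01
d = delta_for(eps)

# spot-check: sample points within delta of x0 = 2 and confirm f stays within eps of 4
xs = [2.0 + d * t / 100.0 for t in range(-99, 100)]
assert all(abs(f(x) - 4.0) < eps for x in xs)
```

Note that smaller eps demands smaller delta, which is the direction of "cost" the metaphor intends: a tighter guarantee on the output must be paid for with a tighter restriction on the input.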
Numerically it makes sense, but I don't have the feeling of cost at all with range restriction. If anything, it should become cheaper.
So, I'm way, way below the Olympic status of Terry Tao, but he might be abstracting a bit too much here. This may not help students understand the topic.
People need different metaphors. So it's often good to present students with a slate of them; if one doesn't work, try another.
It's important not to get stuck on the metaphor, but in practice all but the weirdest of us need them to bootstrap into a mathematical intuition.
This one does absolutely nothing for me either; even casting my mindset back to when I first encountered the epsilon-delta treatment, I don't think this would have helped me. But if it helps others, that's great.
Also, I think this is a concise treatment. If I were to try to present this to a math class, I'd expand it into at least half a class session, if not a full one. Terry Tao is presenting the metaphor fairly directly, not for pedagogical purposes on this post itself. If 3Blue1Brown took this post and ran with it I'm sure a lot more people would find the result useful at that density of presentation.
Increased precision typically costs more economically, so I think it's a pretty good analogy... precise physical measurements require specialized equipment, precise floating-point calculations require more computational power, etc.
> Perhaps readers can propose some other examples of mathematical concepts being re-interpreted as some sort of economic transaction?
On a much more basic level, I plugged e into formulae throughout my schooling to the age of 18, and only later realised that e is the balance (in dollars) you'd have after one year on a bank account of $1 at 100% interest, continuously compounded.
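That claim is easy to check numerically: compounding n times per year turns $1 into (1 + 1/n)ⁿ, which approaches e as n grows.

```python
import math

# $1 at 100% annual interest, compounded n times per year:
# balance after one year is (1 + 1/n) ** n, approaching e as n -> infinity.
for n in (1, 12, 365, 1_000_000):
    print(n, (1 + 1 / n) ** n)

print("e =", math.e)  # 2.718281828459045
```

With n = 1 you get exactly 2.0, and by n = 1,000,000 the balance agrees with e to about six decimal places.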
I love Tao's writing and I love a good mental model for abstract math, but he somehow seemed to make inequalities more complicated for me. Based on conversations with others, I think I have an above-average ability to imagine 2D and 3D spaces. Did this example help you?
I didn't find these examples particularly illuminating, and I'm also a geometry-forward thinker.
Systems of linear inequalities became transparent to me when I took a class on optimization and learned linear programming from the perspective of polytope geometry.
The basic concept is that a linear inequality of the form a^T x <= b defines a halfspace. Taking the intersection of multiple halfspaces is the same as having multiple linear inequalities active simultaneously, which can be rewritten in matrix form as A x <= b. The intersection of two convex sets is again convex, and a halfspace is obviously convex, so it's clear that A x <= b defines a convex polyhedron (a polygon in 2D, a polyhedron in 3D), and a polytope when it's bounded.
Systems of nonlinear inequalities are more complicated but you can sometimes approach them similarly.
This style of thinking is much more approachable for me because I have an easy time playing with these kinds of geometric objects in my head.
I too enjoy reading Tao, but this approach to inequalities did not work for me at all. I am a retired PhD mathematician and I've taught the full gamut of undergraduate math in college. BUT all my life I have struggled with the simplest currency conversion arithmetic when I travel overseas. When I saw Tao's first example I said to myself, if this was how I was introduced to inequalities back in school (long ago) I'd likely have been a history major.