The cultural divide between mathematics and AI

sugaku.net

299 points by rfurmani 11 days ago


nicf - 11 days ago

I'm a former research mathematician who worked for a little while in AI research, and this article matched up very well with my own experience with this particular cultural divide. Since I've spent a lot more time in the math world than the AI world, it's very natural for me to see this divide from the mathematicians' perspective, and I definitely agree that a lot of the people I've talked to on the other side of this divide don't seem to quite get what it is that mathematicians want from math: that the primary aim isn't really to find out whether a result is true but why it's true.

To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true --- the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world --- and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.

NooneAtAll3 - 11 days ago

I feel like this rambling can be summarized as "AI is engineering, not math" - and suddenly a lot of things make sense.

Why is the AI field so secretive? Because it's all trade secrets - and maybe soon to become patents. You don't give away precisely how semiconductor fabs work, only base-level research of "this direction is promising".

Why is everyone pushed to add AI in? Because that's where the money is; that's where the product is.

Why does AI need results fast? Because it's a production line, where you create and design stuff.

Even the core distinction mentioned - that AI is about "speculation and possibility" - is all about experimenting with tools and prototyping. It's all about building and constructing, aka the Engineering/Technology letters of STEM.

I guess the next step is to ask "what to do next?". IMO, the math and AI fields should realise the divide and slowly diverge, leaving each other alone at arm's length - just as engineers and programmers (not computer scientists) already do.

bwfan123 - 10 days ago

I had an aha moment recently. An excited AI researcher claimed: wow, Claude could solve this IMO problem. Then a mathematician pointed out a flaw which the AI researcher had overlooked. The AI researcher prompted the AI with the error, and the AI produced another proof he thought worked, but it was again flawed. The AI played on the researcher's naivete.

Long story short, current AI is doing cargo-cult math - ie, going through the motions with mimicry. Experts can see through it, but excited AI hypesters are blind and lap it up. Even AlphaGeometry (with a built-in theorem prover) is largely doing brute-force search of a limited axiomatized domain. This is not to say AI is not useful, just that the hype exceeds the reality.

kkylin - 11 days ago

As Feynman once said [0]: "Physics is like sex. Sure, it may give some practical results, but that's not why we do it." I don't think it's any different for mathematics, programming, a lot of engineering, etc.

I can see that a day might come when we (research mathematicians, math professors, etc.) no longer exist as a profession, but there will continue to be mathematicians. What we'll do to make a living when that day comes, I have no idea. I suspect many others will also have to figure that out soon.

[0] I've seen this attributed to The Character of Physical Law but haven't confirmed it.

golol - 11 days ago

Nice article. I didn't read every section in detail, but I think it makes a good point that AI researchers maybe focus too much on the thought of creating new mathematics, while being able to reproduce, index, or formalize existing mathematics is really the key goal imo. This will then also lead to new mathematics.

I think the further you advance in mathematical maturity, the bigger the "brush" becomes with which you make your strokes. As an undergrad a stroke can be a single argument in a proof, or a simple lemma. As a professor it can be a good guess for a well-posedness strategy for a PDE. I think AI will help humans find new mathematics with much bigger brush strokes. If you need to generalize a specific inequality from the whole space to Lipschitz domains, perhaps AI will give you a dozen pages, perhaps even of formalized Lean, in a single stroke. If you are a scientist considering an ODE model, perhaps AI can give you formally verified error and convergence bounds using your specific constants. You switch to a probabilistic setting? Do not worry. All of these are examples of not very deep but tedious and non-trivial mathematical busywork that can take days or weeks.

The mathematical ability necessary to do this has in my opinion already been demonstrated by o3 in rare cases. It cannot piece things together yet, though. But GPT-4 could not piece together proofs for undergrad homework problems, while o3 now can. So I believe improvement is quite possible.
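
To make "formalized Lean in a single stroke" concrete, here is a minimal sketch of the flavor of lemma involved. It assumes Lean 4 with Mathlib; `nlinarith` and `sq_nonneg` are real Mathlib names, but the example itself is an illustration, not anything from the article:

```lean
import Mathlib

-- A toy stand-in for "tedious but non-trivial busywork": the
-- AM-GM-style bound a*b ≤ (a^2 + b^2)/2, which nlinarith closes
-- from the single hint 0 ≤ (a - b)^2.
example (a b : ℝ) : a * b ≤ (a ^ 2 + b ^ 2) / 2 := by
  nlinarith [sq_nonneg (a - b)]
```

Each such lemma is easy to state, mildly tedious to prove, and trivially machine-checkable once written - exactly the kind of brush stroke described above.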

mistrial9 - 11 days ago

> Throughout the conference, I noticed a subtle pressure on presenters to incorporate AI themes into their talks, regardless of relevance.

This is well-studied and not unique to AI, to the English-speaking USA, or even to Western traditions. Here is what I mean: a book called Diffusion of Innovations by Rogers explains the history of technology introduction. If the results are tallied in population, money, or other prosperity, the civilizations and language groups that have systematic ways to explore and apply new technology are the "winners" in the global context.

AI is a powerful lever. The meta-conversation here might be around concepts of cancer, imbalance, and chairs on the deck of the Titanic, but this is getting off-topic for maths.

woah - 11 days ago

> Perhaps most telling was the sadness expressed by several mathematicians regarding the increasing secrecy in AI research. Mathematics has long prided itself on openness and transparency, with results freely shared and discussed. The closing off of research at major AI labs—and the inability of collaborating mathematicians to discuss their work—represents a significant cultural clash with mathematical traditions. This tension recalls Michael Atiyah's warning against secrecy in research: "Mathematics thrives on openness; secrecy is anathema to its progress" (Atiyah, 1984).

Engineering has always involved large amounts of both math and secrecy; what's different now?

meroes - 11 days ago

My take is a bit different. I only have a math undergrad and only worked as an AI trainer so I’m quite “low” on the totem pole.

I have listened to Colin McLarty talk about the philosophy of math, and there was a contingent of mathematicians who solely cared about solving problems via "algorithms". The period was the one just preceding modern math, which dates roughly from the late 1800s, when the algorithmists, intuitionists, and logic-oriented mathematicians coalesced into a combination that includes intuition, algorithms, and the importance of logic, leading to the modern way we do proofs and our focus on proofs.

These algorithmists didn't care about the so-called "meaningless" operations that got an answer; they just cared that they got useful results.

I think the article downplays this side of math, and it is the side AI will be best at, or most useful for. Having read AI proofs, they are terrible in my opinion. But if AI can prove something useful, even if the proof is grossly unappealing to the modern mathematician, there should be nothing to clamor about.

This is the talk I have in mind https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN...

xg15 - 11 days ago

> One question generated particular concern: what would happen if an AI system produced a proof of a major conjecture like the Riemann Hypothesis, but the proof was too complex for humans to understand? Would such a result be satisfying? Would it advance mathematical understanding? The consensus seemed to be that while such a proof might technically resolve the conjecture, it would fail to deliver the deeper understanding that mathematicians truly seek.

I think this is an interesting question. In a hypothetical SciFi world where we somehow provably know that AI is infallible and the results are always correct, you could imagine mathematicians grudgingly accepting some conjecture as "proven by AI" even without understanding the why.

But for real-world AI, we know it can produce hallucinations and its reasoning chains can have massive logical errors. So if it came up with a proof that no one understands, how would we even be able to verify that the proof is indeed correct and not just gibberish?

Or more generally, how do you verify a proof that you don't understand?
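
For proofs produced in a proof assistant, there is a concrete answer: a small trusted kernel re-checks every inference step mechanically, so correctness does not depend on any human following the argument. A minimal Lean 4 illustration (assuming Mathlib for the `norm_num` tactic; the arithmetic fact is Euler's classic factorization of the fifth Fermat number):

```lean
import Mathlib

-- The kernel certifies this equation by computation. No insight into
-- *why* 641 divides 2^32 + 1 is needed for the certificate to hold.
example : 2 ^ 32 + 1 = 641 * 6700417 := by norm_num
```

Of course, this only moves the question: the formal statement still has to be read and understood by humans, even if the proof behind it never is.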

calibas - 10 days ago

> While mathematicians traditionally pursue understanding for its own sake, industry researchers must ultimately deliver products, features, or capabilities that create value for their organizations.

This really isn't about mathematics or AI, this is about the gap between academia and business. The academic wants to pursue knowledge for the sake of knowledge, while a business wants to make money.

Compare to computer science or engineering, where business has almost completely pervaded the fields. I've never heard anybody lamenting their inability to "pursue understanding for its own sake", and when someone does advance the theory, there's also a conversation about how to make it profitable. The academic aspect isn't gone, but it's found a way to coexist with the business aspect, for better or worse.

Honestly it sounds like mathematicians have had things pretty good if this is one of their biggest complaints.

umutisik - 11 days ago

If AI can prove major theorems, it will likely be by employing heuristics similar to those the mathematical community employs when searching for proofs and understanding. Studying AI-generated proofs, with the help of AI to decipher their contents, will help humans build that 'understanding', if that is desired.

An issue in these discussions is that mathematics is at once an art, a sport, and a science, and the development of AI that can build 'useful' libraries of proven theorems means something different for each. The sport of mathematics will be basically over. The art of mathematics will thrive as it becomes easier to explore the mathematical world. For the science of mathematics, it's hard to say; it's been kind of shaky for ~50 years anyway, but it can only help.

mcguire - 11 days ago

Fundamentally, mathematics is about understanding why something is true or false.

Modern AI is about "well, it looks like it works, so we're golden".

lmpdev - 11 days ago

I did a fair bit of applied mathematics at uni

What I think mathematicians should remind themselves is that a lot of prestigious mathematicians, the likes of Cantor or Erdős, often employed only a handful of "tricks"/heuristics for their proofs over their careers. They repeatedly and successfully applied these strategies to unsolved problems.

I argue it would not take a tremendous jump in performance for an AI to begin its own journey, similar in kind to the greats'; the only thing standing in its way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.

AI need not be Euler to be an important tool and figure within mathematics

FilosofumRex - 10 days ago

I find this cultural divide exists predominantly among mathematicians who consider existence proofs as real mathematics.

Mathematicians who practice constructive math and view existence proofs as mere intellectual artifacts tend to embrace AI, physics, engineering and even automated provers as worthy subjects.

tylerneylon - 11 days ago

I agree with the overt message of the post — AI-first folks tend to think about getting things working, whereas math-first people enjoy deeply understood theory. But I also think there's something missing.

In math, there's an urban legend that the first Greek who proved sqrt(2) is irrational (sometimes credited to Hippasus of Metapontum) was thrown overboard to drown at sea for his discovery. This is almost certainly false, but it does capture the spirit of a mission in pure math. The unspoken dream is this:

~ "Every beautiful question will one day have a beautiful answer."

At the same time, ever since the pure and abstract nature of Euclid's Elements, mathematics has gradually become a more diverse culture. We've accepted more and more kinds of "numbers:" negative, irrational, transcendental, complex, surreal, hyperreal, and beyond those into group theory and category theory. Math was once focused on measurement of shapes or distances, and went beyond that into things like graph theory and probabilities and algorithms.

In each of these evolutions, people are implicitly asking the question:

"What is math?"

Imagine the work of introducing the sqrt() symbol into ancient mathematics. It's strange because you're defining a symbol as the answer to a previously hard question (what x has x^2 = something?). The same might be said of integration as the opposite of differentiation, or of sine defined in terms of geometric questions. Over and over again, new methods become part of the canon by proving to be both useful and possessed of properties beyond their definition.

AI may one day fall into this broader scope of math (or may already be there, depending on your view). If an LLM can give you a verified but unreadable proof of a conjecture, it's still true. If it can give you a crazy counterexample, it's still false. I'm not saying math should change, but that there's already a nature of change and diversity within what math is, and that AI seems likely to feel like a branch of this in the future; or a close cousin the way computer science already is.
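
As a small aside on "verified but unreadable": the sqrt(2) result from the Hippasus legend is already machine-checked in Lean's Mathlib, where (assuming a current Mathlib, which exposes the lemma under this name) one can cite it without ever reading the proof term behind it:

```lean
import Mathlib

-- Mathlib's machine-checked form of the Pythagorean discovery. The
-- kernel accepts this whether or not anyone inspects the underlying proof.
example : Irrational (Real.sqrt 2) := irrational_sqrt_two
```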

m0llusk - 11 days ago

> The last mathematicians considered to have a comprehensive view of the field were Hilbert and Poincaré, over a century ago.

Henri Cartan of Bourbaki had not only a more comprehensive view but also a greater sense of the potential of mathematical modeling and description.

FabHK - 10 days ago

> One striking feature of mathematical culture that came up was the norm of alphabetical authorship. […] There are some exceptions, like Adleman insisting on being last in the RSA paper.

lol, took me a second to get the plausible reason for that

krnsll - 10 days ago

As a mathematician, I can't help but simmer each time I find the profession's insistence on grasping the hows and whys of matters dismissed as pedantry. Actionable results are important, but absent understanding, we will never have any grasp of the downstream impact of such progress.

I fear AI is just going to lower our general epistemic standards as a society, and that we will forget essential truth-verifying techniques in the technical (and other) realms altogether. Needless to say, this has an impact on our society's ethical and, effectively, legal foundations, because ultimately, without clarity on the hows and whys, it will be near impossible to justly assign damages.

wanderingmind - 11 days ago

Terence Tao recently gave a lecture on machine-assisted proofs that helped even common folk like me understand the massive changes coming to math within the next decade. In particular, it's fascinating to see how AI, and especially Lean, might provide an avenue for large-scale collaboration in math research, bringing it on par with how research is done in other sciences.

https://www.youtube.com/watch?v=5ZIIGLiQWNM

trostaft - 10 days ago

> Unlike many scientific fields, mathematics has no concept of "first author" or "senior author"; contributors are simply listed alphabetically.

I don't think this is (generally) true? Speaking as a math postdoc, at least in my field of computational mathematics there's definitely a notion of first author. That said, a note on individual contributions at the bottom of the paper is becoming more common.

lairv - 11 days ago

> A revealing anecdote shared at one panel highlighted the cultural divide: when AI systems reproduced known mathematical results, mathematicians were excited, while AI researchers were disappointed

This seems like a caricature. One thing I've often heard in the AI community is that it'd be interesting to train models with an old data cutoff date (say 1900) and see whether the model is able to reinvent modern science.

BrenBarn - 10 days ago

I think a lot of this is not so much "math vs. AI" as "anyone who cares about anything other than making as much money as possible vs. anyone who only cares about making as much money as possible".

EigenLord - 11 days ago

Is it really a culture divide, or is it an economic incentives divide? Many AI researchers are mathematicians. Any theoretical AI research paper will typically be filled with eye-wateringly dense math. AI dissolves into math the closer you inspect it. It's math all the way down.

What differs are the incentives. Math rewards openness because there's no real concept of a "competitive edge"; you're incentivized to freely publish and share your results, as that is how you get recognition and, hopefully, a chance to climb the academic ladder. (Maybe there is a competitive spirit between individual mathematicians working on the same problems, but this is different from systemic market competition.) AI is split between being a scientific and a capitalist pursuit; sharing advances can mean the difference between making a fortune and being outmaneuvered by competitors. It contaminates the motives. This is where the AI researcher's typical desire for "novel results" comes from as well: they are inheriting industry's mandate to produce economic innovations. It's a tidier explanation to tie the cultural differences to material motive.

Sniffnoy - 11 days ago

> As Gauss famously said, there is "no royal road" to mathematical mastery.

This is not the point, but the saying "there is no royal road to geometry" is far older than Gauss! It goes back at least to Proclus, who attributes it to Euclid.

SwtCyber - 10 days ago

If AI-generated proofs become incomprehensible to humans, do they still count as -math- in the traditional sense?

j2kun - 11 days ago

This is written in the first person, but there is no listed author and the website does not suggest an author...

nothrowaways - 11 days ago

You can't fake influence

weitendorf - 10 days ago

If you look closely at the history of mathematics, you can see that it worked similarly to current AI in many respects (not so much the secrecy): people were often concerned only with whether something worked rather than why it worked (e.g. so that they could build a building or compute something), and full theoretical understanding sometimes came significantly later than the knowledge that something was true or useful.

In fact, the modern practice of mathematics as this ultimate understandable system of truth and elegance (the concept predates the practice, of course, but was more an opinion than a ritual) seemingly began in Ancient Greece with the practice of proofs and the early development of mathematical "frameworks". It didn't reach its current level of rigor and sophistication until 100-150 years ago, when Formalism became the dominant school of thought (https://en.wikipedia.org/wiki/Formalism_(philosophy_of_mathe...), spearheaded by a group of mathematicians who held even deeper beliefs often referred to as Mathematical Platonism (https://en.wikipedia.org/wiki/Mathematical_Platonism). (Note that these Wikipedia articles are not amazing explanations of the concepts, how they relate to realism, or how they developed historically, but they are adequate primers.)

Of course, Gödel proved that truths exist outside of these formal systems (only a couple of decades after mathematicians had started building a secret religion around worshipping Logos; these beliefs were pervasive, see e.g. Einstein's concept of God as a clockmaker or Erdős's references to "The Book"), which leaves us almost back where we started: we might need to accept that there are some empirical results and patterns which "work" but which we do not fully understand, and may never understand. Personally, I think this philosophically justifies not subjecting oneself to the burden of spending excess time understanding or proving things that have never been understood before; they may elude elegance (as with the 4-color proof) or even knowability.

We can always look backwards and explain things later, and of course, it's a false dichotomy that some theorems or results must be fully understood and proven (or proven elegantly) before they can be considered true and used as a basis for further results. Perhaps it is unsatisfying to those who wish to truly understand the universe in terms of mathematical elegance, but that asshole used mathematical elegance to disprove mathematical elegance as a perfect tool for understanding the universe already, so take it up with him.

Personally, as someone who at one time heavily considered pursuing a life in mathematics partly because of its ability to answer deep truths, I think Gödel set us free: to understand or know things, we cannot rely solely on mathematics. Formal mathematics itself tells us that there are things we can only understand by discovering them, building them, or experimenting with them. There are truths that CUDA cowboys can uncover that LaTeX liturgy cannot.

throw8404948k - 11 days ago

> This quest for deep understanding also explains a common experience for mathematics graduate students: asking an advisor a question, only to be told, "Read these books and come back in a few months."

With an AI advisor I do not have this problem. It explains the parts I need, in a way I understand. If I study some complicated topic, AI shortens it from months to days.

I was somewhat mathematically gifted when younger; sadly, I often reinvented my own math because I did not even know that a given part of math existed. Watching how DeepSeek thinks before answering is REALLY beneficial. It gives me many hints and references. Human teachers are like black boxes while teaching.

tech_ken - 11 days ago

Mathematics is, IMO, not the axioms, proofs, or theorems. It's the human process of organizing these things into conceptual taxonomies that appeal to what is ultimately an aesthetic sensibility (what "makes sense"), and of updating those taxonomies as human understanding and aesthetic preferences evolve, along with practical considerations ('application'). Generating a proof of a statement is like a biologist identifying a new species: critical, but also just the start of the work. It's the macropatterns connecting the organisms that lead to the really important science, not just the individual units of study alone.

And it's not that AI can't contribute to this effort. I can certainly see how a chatbot research partner could be super valuable for lit review, brainstorming, and even 'talking things through' (much as mathematicians get value from talking aloud). This doesn't even touch on the ability to generate potentially valid proofs, which I do think has a lot of merit. But the idea that we could totally outsource the work to a generative model seems impossible by definition. The point of the labor is to develop human understanding; removing the human from the loop changes the nature of the endeavor entirely (basically to algorithm design).

Something similar holds for art (at a high level, and glossing over 'craft art'); IMO art is an expressive endeavor: one person communicating a hard-to-express feeling to an audience. GenAI can obviously create really cool pictures, and these can be grist for art, but without some kind of mind-to-mind connection and empathy the picture is ultimately just an artifact. The human context is what turns the artifact into art.

esafak - 11 days ago

AI is young, and at the center of the industry spotlight, so it attracts a lot of people who are not in it to understand anything. It's like when the whole world got on the Internet, and the culture suddenly shifted. It's a good thing; you just have to dress up your work in the right language, and you can get funding, like when Richard Bellman coined the term "dynamic programming" to make it palatable to the Secretary of Defense, Charles Wilson.

randomNumber7 - 10 days ago

Imo mathematicians want to be very smart, when a lot of AI is actually easy to understand with good abstract and logical thinking and linear algebra.