The Future of Mathematics Symposium
I attended Day 1 of the Future of Mathematics Symposium at Stanford last Friday, May 1. I was unable to attend in person on the second day but watched the Day 2 lectures on YouTube. It was a fun, star-studded event, drawing many mathematicians and AI industry people who are interested in where research mathematics is going. There was a lot of excitement about recent advances in AI's ability to do math at levels that only 1-2 years ago (or in some cases even 8-9 months ago) seemed far out of reach. The attendees were a remarkable group - smart, serious, and thoughtful people who love math and are excited about the promise that AI holds for fundamentally transforming mathematics research in (mostly) positive ways.
Main impressions
Lean is huge. The rise of Lean as the 800-pound gorilla of the ecosystem of math formalization tools is not a completely new phenomenon - already 1-2 years ago many people in my circle of colleagues and collaborators were talking quite a lot about Lean. But 2026 will surely be remembered as the year when essentially everyone began talking about Lean and when it became the de facto standard in the formalization business. This exponential rise in Lean's popularity seems due in no small part to its adoption by several AI companies as a critical piece of infrastructure underlying their efforts to develop large language models that perform mathematical reasoning at a very high level, and to those same companies' recent development of some extremely powerful AI tools for autoformalization, such as Aristotle, Gauss, and AxiomProver. These developments were a key topic of discussion, with several of the speakers (Leonardo de Moura, Clark Barrett, Kevin Buzzard and Maryna Viazovska) speaking at length about projects and tools related to Lean and about recent milestones reached in both human and AI-assisted formalization.
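For readers who haven't encountered Lean, here is a toy illustration of my own (not an example from the talks) of what a formalized statement and proof look like in Lean 4: the claim and its justification are both checked by the system, which is what makes Lean useful as a verifier for AI-generated mathematics.

```lean
-- A toy formalized theorem in Lean 4 (illustrative, not from the symposium):
-- commutativity of addition on the natural numbers, proved by appeal to a
-- library lemma. Lean will reject the file if the proof does not check.
theorem add_comm_toy (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```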
This was more of an AI conference than a mathematics conference. For an event that was ostensibly about the future of mathematics, most of the speakers were not really focusing on mathematics per se. They threw around plenty of math terms - names of mathematical areas, theorems, conjectures, and so on - but said little about the mathematical substance behind the jargon. Instead, most of the talks focused on AI, AI tools, and the (genuinely exciting) ways in which these tools are now being used to do actual math research, with the math itself feeling a bit like an afterthought.
To give one small example, in his introductory remarks at the beginning of Day 2, Jared Lichtman, the symposium organizer, made a brief reference to "the so-called Erdős 1196 problem" as part of an anecdote about recent AI-enabled research successes, but made no attempt to explain what the problem actually says. (To be clear, this isn't meant as a criticism of Lichtman, a very serious mathematician who is quite passionate about his work, or of the other speakers. There was certainly no time to get into math details in a short introductory segment, and the other speakers surely skipped the mathematical details of their topics not for lack of enthusiasm but because they had over-packed agendas of more AI-focused themes to cover.)
Nonetheless, the talks by Sébastien Bubeck and Maryna Viazovska were an exception to this math-isn't-the-main-thing mindset: both speakers spent much of their time actually explaining the very interesting math they had been working on with their AI and Lean tools. At least from my own point of view this was welcome and made their talks more interesting.

A surprising amount of technical detail was discussed. I came in expecting that almost all of the talks would have a "fluffy" quality - generalities and bold, imprecise predictions about the future, without much in the way of technical substance. There was some fluff, to be sure, but broadly speaking I was pleasantly surprised (and wrong in my initial prediction) by the amount of specific, carefully cited technical detail from people who live and breathe AI and math and are happy to share tips from their recent work. (And when I say "recent": in one example mentioned in Sanjeev Arora's talk, he reported on some impressive work he had done literally a couple of days before his talk.)
Brief thoughts on a few of the talks
Leonardo de Moura's talk. An overview of Lean by its creator. Fascinating, deeply technical talk.
Adam Brown's talk. This was an entertaining and well-delivered talk (probably the most "fun" talk of the symposium) giving an overview of the industry's progress in getting AI to successfully tackle difficult mathematics problems. Brown's historical overview started in 2019, when by his description AI was doing math roughly at the level of a preschooler; continued to today's near-superhuman performance at solving International Math Olympiad problems; and ended with reasoned (and optimistic) extrapolations of how these trends will continue into the future.
Sébastien Bubeck's talk. One of my favorite talks, largely because of the amount of really interesting mathematical detail that, as I mentioned earlier, Bubeck included. He related five quite detailed examples of real math research done with the help of AI over the last year.
Maryna Viazovska's talk. Another talk I really enjoyed thanks to its heavy math content pertaining to her remarkable work on the sphere packing problems in dimensions 8 and 24. This topic is close to my heart (I was gratified that Viazovska even cited my own 2023 paper making a small contribution to the subject). An excellent speaker covering groundbreaking, inspiring work - what more can one ask for?
Sanjeev Arora's talk. A great talk full of very detailed information about the sophisticated harnesses for AI problem-solvers that Arora and his collaborators have designed, which have enabled them to get large language models to achieve some of the top results in the International Math Olympiad and the First Proof research-level challenge.
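The talk did not spell out the internals of these harnesses, but the general idea behind such systems can be sketched. The following is a minimal, hypothetical illustration of my own (the function names and the stand-in "model" and "verifier" are invented, not Arora's): a harness wraps a solver in a generate-verify-retry loop, only accepting answers that pass an independent check.

```python
# A minimal, hypothetical sketch of a "harness" around an AI problem-solver:
# generate candidate solutions, check each with an independent verifier, and
# retry until one passes. The generator and verifier below are stand-in stubs,
# not a real model; a production harness would call an LLM and a proof checker.

def harness(generate, verify, problem, max_attempts=5):
    """Ask the generator repeatedly until the verifier accepts an answer."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(problem, attempt)
        if verify(problem, candidate):
            return candidate, attempt
    return None, max_attempts

# Stand-in "model": proposes the square of the attempt number as its answer.
fake_generate = lambda problem, attempt: attempt * attempt
# Stand-in verifier: accepts a candidate iff it equals the target value.
fake_verify = lambda problem, candidate: candidate == problem

answer, attempts = harness(fake_generate, fake_verify, 9)
# The third proposal (3 * 3 = 9) is the first one the verifier accepts.
```

The key design point is that the verifier, not the generator, decides success - which is exactly why formal tools like Lean pair so naturally with LLM-based solvers.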
Terry Tao's talk. Titled "New mathematical workflows", Tao's talk explored what the future holds for mathematicians in an age of (to use his words) "proof abundance", in which proving things becomes much easier thanks to powerful AI but other goals that mathematicians hold dear are not necessarily easier to achieve. In such a world, all sorts of systems that form the collective infrastructure of mathematical research (journals, refereeing, the dissemination of research through talks) will be thrown out of balance, and Tao gave a thoughtful discussion of how mathematicians will need to adapt to this new reality.
Sergei Gukov's talk. Another talk I enjoyed, focusing on a particular flavor of AI-and-math research that employs sophisticated reinforcement learning algorithms in pursuit of advances on extremely difficult math problems. He gave an interesting history of reinforcement learning, explaining that many of the field's basic algorithms were initially developed to teach computers to play Atari video games at a high level. It's inspiring to think that (essentially) the same algorithms now power AlphaFold, self-driving cars, and the sorts of AI pipelines Gukov sets up to attack difficult math questions. I also like that the research he is doing and championing uses hard-core machine learning ideas, somewhat orthogonal to the LLM-based approaches taken by many other researchers these days.
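To make the reinforcement learning idea concrete, here is a toy sketch of my own (not from Gukov's talk) of tabular Q-learning, one of the basic algorithms from that Atari-era lineage: an agent on a small chain of states learns, purely from trial, error, and a single terminal reward, to always move toward the goal. The environment and all parameter values are invented for illustration.

```python
import random

# Toy tabular Q-learning on a 6-state chain (hypothetical illustration):
# states 0..5, actions 0 = step left, 1 = step right, reward 1 only upon
# reaching state 5, which ends the episode. The update rule below is the
# classic Q-learning update that deep RL systems build on.

def q_learning_chain(n_states=6, episodes=2000, alpha=0.5, gamma=0.9,
                     eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action choice, breaking ties randomly.
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: move the estimate toward the reward plus
            # the discounted best value of the next state.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
# After training, the greedy policy should step right in every
# non-terminal state.
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(5)]
```

The pipelines Gukov described are of course far more elaborate, but they share this skeleton: a search guided by a learned value signal rather than by a language model's next-token predictions.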
Summary
Noteworthy quote: "We haven't yet seen major advances brought about by AI, but it is coming, very plausibly this year" (Sébastien Bubeck)
Favorite quote: "mathematics is a hypergraph" (Mike Freedman)
The event is fully watchable on YouTube. I recommend checking it out; even in this very fast-moving industry, I expect these talks will continue to be relevant for at least the next 3-4 months.

