Q&A: Ken Ono feared AI. Now he trains it.
Ken Ono (Photo by Matt Riley, UVA.)
“Am I doing a better service to science by writing 20 more papers that only people in my field will read or by participating in this movement?” Ken Ono asked himself last year. At the time, Ono was an endowed chair at the University of Virginia. In the following months, he’d take a leave of absence from the university, move to Silicon Valley, and become the founding mathematician at an AI startup, Axiom Math.
Axiom is one of several companies focused on developing AI tools to verify mathematical proofs. In the future, similar tools could be applied to check AI-generated computer code, a potentially lucrative service now that AI tools can write lines of code faster than any human can debug them. The startups Harmonic and Math Inc., as well as Google DeepMind, are targeting similar AI-powered verification tools.
Axiom has attracted investor attention, reaching a valuation of $1.6 billion earlier this year. Ono says that he has no equity in the company and that he receives a salary that is similar to his former compensation at Virginia. His interests, he says, are in helping researchers use AI to think in more-creative ways and across more disciplines.
The following interview has been lightly edited for length and clarity.
Let’s rewind to last year. What were you doing at the University of Virginia?
I have been your prototypical math researcher for around 30 years. My work as a number theorist spans classical number theory and applied problems. Perhaps the most notable for the Physics Today community is that I was one of the main authors of the paper that proved the umbral moonshine conjecture.
You were invited to a May 2025 conference in Berkeley, California, for FrontierMath, a project supported by the nonprofit research institute Epoch AI. What did you think going in, and what did you learn there?
To be honest with you, I was very skeptical that AI could be useful for research in mathematics. Earlier versions of ChatGPT would say 9.11 was larger than 9.8. Then, last year, these large language models [LLMs] were close to performing at the gold medal–level on International Mathematical Olympiad problems. Many mathematicians took notice and figured we should engage. Our goal at the FrontierMath conference was to assemble a list of 50 very challenging research-level math problems to test AI models.
At that meeting, I discovered that ChatGPT o4-mini had remarkable access to the accumulated knowledge of the fields that I knew well. I could ask it a research-level problem, and ChatGPT might not get the right answer, but it said the right things and it knew the right papers. If I were talking to a beginning graduate student, it would take three or four months, maybe a year, for our conversation to get to that level. And here, the computers could just do it.
How did you feel about that?
I was depressed last June and July. I was thinking, What is my future in science when my bread and butter seems accessible with just a few keystrokes? The model had read all my papers. I felt like I’d spent 30 years of my life writing them. And let’s face it, the next paper I write is largely a function of what I’ve thought about for the last 30 years.
I took some comfort in the fact that to nail a theorem to finish a paper, there’s a last 3–5% of effort where you really need to bang your head against the wall to either find an elegant new argument or carry out a difficult calculation that an LLM can’t look up. But I was thinking, Has my career come down to finding problems where that’s all I could do?
You have now joined the AI industry. What changed?
At the end of the day, the crafting of solutions in mathematics is 90% of our effort cognitively. Maybe that’s only one-third of the research paper. The other two-thirds is regurgitating or framing what’s already been known to the community. AI can significantly lighten the load in terms of writing lemmas, which are smaller theorems used to prove a bigger theorem, or making sure I don’t miss out on a reference in the literature. Even though LLMs are well known to hallucinate, I can check their work.
When I finally understood that, I recognized that this is not the end of mathematics. The computer should not be replacing us as scientists, but maybe it’s replacing a large part of the heavy lift that was routine to us anyway. When you get to that conclusion, it gives you not only a sign of hope, but it gives you a new mission, a new purpose. That’s why I decided to consider roles in Silicon Valley. I had a number of offers, and I ended up in Palo Alto working with one of my former students, Carina Hong.
How is Axiom working on the verification problem?
We are developing tools that can take an informal proof, formalize it in a computer language called Lean, and check if the proof is correct. I’m not advocating that humans be removed from the adjudication process, but I want this to be a tool that gives absolute confidence that a mathematical statement is or is not correct.
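To give a sense of what formalization in Lean looks like (a toy illustration of the general approach, not Axiom's actual code), here is a small statement, commutativity of addition on the natural numbers, together with a proof that the Lean checker verifies step by step:

```lean
-- A toy machine-checked proof in Lean 4: addition of natural numbers commutes.
-- Each rewrite step must match exactly, or the checker rejects the proof.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction m with
  | zero => rw [Nat.zero_add, Nat.add_zero]
  | succ k ih => rw [Nat.succ_add, ih, Nat.add_succ]
```

A research-level proof is formalized the same way, just at vastly greater length: every lemma is reduced to steps the kernel can check mechanically, which is what gives the "absolute confidence" described above.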
Give us an example of the type of work you do as the founding mathematician.
I’m not an engineer. I can’t write the code. But I’ve been learning quite a bit of what goes on under the hood. Last week, I wrote three internal papers for our engineers. I spend a lot of time thinking about how successful practices in mathematics might translate to executable code. I help engineers understand how mathematicians perform their craft.
In what ways can researchers use AI? And what role does AI play in that collaboration?
There are at least three different kinds of AI that we should talk about. First, LLMs today are essentially the most awesome librarians you could ever meet. They have access to the accumulation of all human knowledge. They're stochastic parrots. But do you want your librarian to be your neurosurgeon? Although they are very reliable for a large percentage of queries, they make mistakes.
The computer should not be replacing us as scientists, but maybe it’s replacing a large part of the heavy lift that was routine to us anyway.
The second kind of AI is verification, and that’s a large part of what we’re doing at Axiom. How does one know that an informal scientific argument is correct? I like to think of Lean and the formalization part as the math teacher.
The third part of AI is, How do you deploy state-of-the-art machine learning and reasoning and advances in AI to discover or find glimpses of patterns that humans just cannot spot? This is perhaps where AI has hit its home runs in science and academia. An example is Google DeepMind’s AlphaFold and its advances in protein folding. In March, we released a tool called Axplorer, for scientists who are studying mathematical datasets and may be wondering, Maybe there’s a needle in the haystack, the glimpse of something, but how do I look for it?
How might these different types of AI apply to the physical sciences?
LLMs could be helpful for every physicist. Verification tools could be useful for folks working on general relativity and in black hole physics who want to ensure that the mathematical foundation of their work is airtight. And discovery AI tools could be indispensable for sifting through petabytes of collision data to identify an important bump in a plot, or for materials scientists predicting new material structures.
Will using AI erode scientists’ skills?
If you say, Here’s a question, here’s the answer, and that’s the end of the story? That’s a huge erosion of skills. We’ve got scientists using AI to publish three times the number of papers.
In my years in the provost office, one of the things I wanted to do was support interdisciplinary science. How do I get scientists to break out of their silos? One night recently, I was having a hard time going to sleep, and I started engaging with an LLM: “Tell me who at Stanford uses mathematics and neuroscience. Does it relate to anything I do?”
An hour later, I’m reading about new kinds of brain cells that people have discovered that help us navigate. These brain cells are arranged in a virtual hexagonal lattice. I have thought deeply about lattices as a number theorist, never thinking that the same features that excite me as a mathematician are somehow embedded in the brain. I started reading Scientific American articles, then PNAS articles, and now I’m up to the textbooks in that field. It is exciting to know that in science, we don’t have to be siloed.
You mentioned earlier that you discovered ChatGPT had read all your papers. How do you feel about companies feeding your research into models without your permission?
I freely share my research on arXiv, and so I consider it a contribution to the global knowledge commons. I don’t mind when it is used in training. However, my commercial books represent a distinct category of intellectual property where the rules of fair compensation must apply. One of my books is officially part of the $1.5 billion Anthropic settlement, which establishes a vital precedent for remunerating creators. It’s a step toward a future where AI progress respects the financial and legal rights of authors.
You’re a member of the Mathematical Sciences Education Board at the US National Academies. AI is quickly shifting the educational landscape. How can education keep up?
I don’t think any of us agree that the current models of education make sense. They are built around traditions that depend on certification and grades. We live at a time when knowledge has become cheap. The fact that knowledge has become cheap means that we really need to rethink education. What are the tools that we want? I don’t accept the idea that manual trades like plumbing are the only safe bets. I think that’s a cheap answer. Maybe there is some truth to it, but I think that’s giving up on our young people. We still need humans to set the dials and adjudicate in science, law, policy, and any role where human judgment and accountability remain irreplaceable.
If you’ve had children, you know your daughters and sons would sing out loud, even if they had no talent, because they love music. Where did we lose that? What AI is giving us, when implemented and deployed properly, is a tool for us to expand our creative horizons.
We need a call to action, and not just to physics or math professors. This is a call to action to university presidents and provosts. Instead of trying to preserve the current system, act now and prepare for the future, because our young people demand it. Rethink your curriculum so that we are training what is truly human.