Exclusive Interview with Terence Tao by QbitAI: Why I Am Launching an AI x Science Organization Now

Mathematician Terence Tao has announced a new role in AI: co-founder of the SAIR Foundation.

Before this, he was already world-famous as a mathematical prodigy: the youngest gold medalist in IMO history at 13, and by 24 the youngest tenured full professor in the history of the University of California, Los Angeles (UCLA).

In recent years, with the rise of ChatGPT, he has become a flagship figure in AI × Mathematics, increasingly thinking and speaking about the possibilities of the intersection between AI and basic sciences.

Now, just as 2026 begins, 50-year-old Terence Tao has taken a further step. As a co-founder, he has initiated the SAIR Foundation. He hopes this non-profit alliance, aimed at reshaping the relationship between AI and science, can connect academia and industry, and unite and help more young scientists advance two major goals:

First, to build AI using scientific methods; Second, to reshape basic scientific research with the help of AI.

Main expert members of the SAIR Foundation

Following the official announcement of the SAIR Foundation, Terence Tao and Chuck NG, the two co-founders of SAIR, discussed everything related to AI x Science, mathematics, and basic research in an exclusive interview with QbitAI.

In their view, the most exciting aspect of AI x Science lies in the democratization of research. They hope that through the bridge of SAIR, they can open the doors of the ivory tower for more young people.

In the future, there could be 10,000 Terence Taos in the world.

The above is just the tip of the iceberg. In this in-depth conversation lasting over 90 minutes, you will also see the following brilliant viewpoints:

  • If AI can express confidence levels when answering, such as "I am quite sure" or "I am not very certain here," its actual usability will be significantly improved.

  • The model where academia and industry perform their respective duties separately does not work; the speed is too slow. The AI era requires close cooperation between the two.

  • Compared to finance and healthcare, science is a safer testing ground for AI. Relatively speaking, miscalculating a math problem results in almost no loss.

  • Seemingly repetitive and tedious basic work is actually very important for personal growth; young people need these valuable training opportunities.

  • Most disciplines are communicating with each other, and AI is an important catalyst facilitating this interdisciplinary interaction.

  • Do not simply ban new technologies; the task of universities is to teach students how to use them correctly.

  • We need to solve structural bottlenecks in the research system and accelerate the evolution of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) in a scientific and safe manner through interdisciplinary global collaboration.

Below is the complete transcript of the interview, carefully proofread, exceeding 15,000 words. To improve readability, QbitAI has made appropriate edits and cuts to the content without changing the original meaning.

Please enjoy.

AI x Science Needs Its Own Vertical AI

QbitAI: First, congratulations on the establishment of the SAIR Foundation. Can you talk to us about the motivation behind launching this AI x Science institution?

Terence Tao: I believe AI will fundamentally change the model of scientific research, and the core question we must first clarify is: How can we reasonably and efficiently utilize AI in scientific research scenarios?

In fact, we need some high-quality pilot projects to demonstrate what best practices look like, allowing other scientists to reference and learn from them.

In the past, such work was mainly driven by universities, research institutions, and government departments. However, in the current environment, support from other fields is equally important. It is more flexible and can help us attempt some innovative things.

I am very happy to participate in founding this institution, hoping to explore new ideas through it, try bolder paths, and see how far we can go when AI and science are combined in a more prudent manner.

Chuck: I have always enjoyed collaborating with excellent scientists, and I am truly excited to launch this organization with Terry (Terence Tao). At the same time, multiple Nobel Prize and Turing Award winners have joined us.

Terry spoke more about the academic side just now, while I have long promoted the integration of academia and industry, which is one of the reasons we are particularly passionate about this project.

If you look at our launch event, you will find that it gathered top academic researchers from around the world alongside representatives from several technology companies, including NVIDIA, OpenAI, Amazon, and Microsoft. All parties exchanged views on the development of AI x Science, laying the foundation for subsequent cross-domain collaboration.

When academia and industry sit together, it brings many opportunities; there is so much that can be done.

QbitAI: From your perspective, what are the main shortcomings of current AI technology? Why can't the research field directly use models from OpenAI or other companies?

Terence Tao: We are actually already trying to use some mainstream LLMs, and indeed some researchers have achieved results with them.

The problem is that models produce hallucinations, which is a very serious issue for scientific research. Research requires verifiable and trustworthy systems.

Another challenge is interpretability. A model might sometimes give an idea that looks good, but it often won't explain whether this idea comes from existing literature in the training data or some new combination, nor can it clarify its relationship with existing work.

Science is not just about solving isolated problems one by one; more importantly, it is about putting new results into the existing knowledge system so that successors can continue to advance based on them. This requires results to have traceability, standardized citations, and clear explanations of how to extend or modify them.

Commercial large models can sometimes achieve these things, but not stably. If we could have AI specifically designed for scientific research, or enforce verification through better workflows, and systematically connect results with the literature system, the help to science would be much greater.

The most likely ultimate direction is embedding existing models into a stricter framework, combined with powerful verification and validation mechanisms, making them true tools for scientific discovery.
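The "stricter framework with verification" idea can be illustrated with a minimal propose-and-verify loop. Everything below is a hypothetical sketch: `propose_roots` stands in for an unreliable model that may hallucinate, and a candidate answer is accepted only after an independent check.

```python
import random

random.seed(0)  # deterministic for the example

def propose_roots(a, b, c):
    """Stand-in for an unreliable model: blindly guesses integer
    roots of a*x^2 + b*x + c = 0, often wrongly."""
    return [random.randint(-10, 10) for _ in range(2)]

def verify_root(a, b, c, x):
    """Independent check: substitute the candidate back into the equation."""
    return a * x * x + b * x + c == 0

def solve_with_verification(a, b, c, attempts=10_000):
    """Accept a proposal only after it passes verification."""
    for _ in range(attempts):
        for x in propose_roots(a, b, c):
            if verify_root(a, b, c, x):
                return x  # verified, so safe to trust
    return None  # the "model" never produced a checkable answer

# x^2 - 5x + 6 = 0 has roots 2 and 3
root = solve_with_verification(1, -5, 6)
```

The point is that the proposer can be arbitrarily unreliable; trustworthiness comes from the cheap, independent verifier, which is exactly the property mathematics and formal proof offer.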

Chuck: In fact, at a daily level, such as writing, AI performance is already quite good.

But once you enter deeper, more professional technical fields, the situation is completely different. In many specialized scientific fields, high-quality, structured data itself is very limited. This is also why close cooperation with scientists is essential.

Our goal is to polish these systems until they can be reliably used for research. Ultimately, we hope to make advanced AI accessible to the vast majority of people, which is "AI Democratization".

Terence Tao: Let me give a very simple example.

When scientists propose a conclusion, they usually state their level of confidence in that conclusion at the same time, such as "I am very confident about this," "I have some confidence," or "This idea is not yet mature."

AI systems do not do this; they almost always give answers in a tone of one hundred percent certainty. If AI could explicitly express different levels of confidence, its practicality in research would improve significantly.
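As a toy illustration of what such calibrated language could look like, here is a hypothetical mapping from a model's numeric confidence to the hedged phrases Tao describes. The thresholds and wording are invented for the sketch, not taken from any real system.

```python
def hedge(confidence: float) -> str:
    """Map a numeric confidence in [0, 1] to hedged language.
    The cutoffs are arbitrary choices for illustration."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    if confidence >= 0.9:
        return "I am quite sure"
    if confidence >= 0.6:
        return "I have some confidence"
    return "I am not very certain here"

for p in (0.95, 0.7, 0.3):
    print(f"{p:.2f} -> {hedge(p)}")
```

In practice the hard part is not the phrasing but producing a well-calibrated number in the first place, which is one of the research problems Tao is pointing at.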

QbitAI: Currently, the main thread of the entire industry is Scaling: more data, larger models, stronger computing power. But SAIR is more concerned with "Scaling the Science of AI". What does this specifically refer to?

Terence Tao: So far, the route adopted by technology companies has indeed been very successful. When computing power and training data increase by an order of magnitude, model capabilities show a significant leap. This method has worked well up to now.

But in the long run, it will hit a wall. Data is not infinite; the public internet has basically been used up, and there are also constraints in terms of energy and computing power.

Additionally, current AI can solve very difficult problems, but often very inefficiently. A human mathematician might grasp the core of a problem after seeing ten examples and then generalize from a single case; existing AI often needs millions of training samples, repeated attempts, and sometimes hundreds of runs to get a correct result.

In the context of research, we don't always need the largest, most general models. Many research tasks are highly specialized. In some scenarios, smaller, lower-power, and lower-cost models that can even run directly on personal computers are sufficient.

Large companies focus more on building general models that can "do everything"; whereas research scenarios may need specialized tools tailored for specific workflows. Developing and supporting such tools is exactly what we hope to promote through SAIR.

QbitAI: Can I understand it this way: in the direction of AI x Science, what is truly key is better principles and methodologies, rather than blindly making models larger?

Terence Tao: You can understand it that way. We need better ways to assess credibility, express confidence levels, and also need to improve the interpretability of systems.

We also need to improve the way humans collaborate with AI. The most common interaction mode now is that you give the model a prompt, and it directly gives a complete answer.

But in many research scenarios, researchers often care not only about the final conclusion but also want to see the reasoning process itself. You might want to intervene midway, supplement new information, or explore different paths.

Currently, many researchers take a wait-and-see attitude toward the application of AI. On one hand, this is because they have personally experienced system errors; on the other hand, existing tools do not match their core research needs.

If scientists can develop tools that truly fit their own workflows and research needs, I believe the usage rate of these systems will increase significantly.

Chuck: I would like to add from another angle related to credibility: "data quality."

One of our close partners, John Hennessy, has been providing advice to the SAIR Foundation. He is a Turing Award winner and also the Chairman of Alphabet. He often emphasizes that in research, the importance of improving data quality is no less than improving the model itself.

Trust itself is also a broader social issue. In different regions, people's level of trust in data and technology varies. In American society, trust in a typical institution or technology runs around 70% to 80%, while trust in AI-generated content (AIGC) is often only about half that figure.

This gap also explains why many organizations, including OpenAI, xAI, and other AI companies, hope to cooperate with us. Trust, reliability, and scientific rigor are crucial.

QbitAI: As AI continues to lower the threshold for research, what changes will it bring to the entire industry and the global research landscape?

Chuck: This is a very good question. I think the ultimate goal is, through cooperation with top scientists and researchers, to elevate AI to a level of being "default trustworthy".

Once AI reaches such a level, it won't just be used by experts. Ordinary people will also be able to use it with confidence. For example, your parents or even your grandparents could rely on AI in their daily lives without worrying about whether it is reliable or not.

This is exactly why we want to bring together first-class scientists and industry partners to learn from each other and advance together. Only through such collaboration can the technology itself and its application in real research scenarios move forward together.

We hope AI can become a daily tool, just like cars. Only when AI reaches this level of reliability will the global research landscape truly change.

QbitAI: How specifically will SAIR participate in and promote this transformation?

Chuck: Our approach is to bring academia and industry together in a more direct and organized way.

On the academic side, many researchers lack computing power and find it difficult to obtain long-term, stable funding; on the industry side, companies possess computing power, capital, and engineering capabilities, but there is still an obvious mismatch between existing models and tools and research needs.

The research field increasingly needs the participation of broader social forces, including donors, foundations, investors, and entrepreneurs. Bringing these people together can better support those research directions that are truly long-term and have high impact.

We believe this collaboration model can push the boundaries of science further.

Terence Tao: In the past few decades, a mainstream model has been: academia mainly relies on official funding, while industry is responsible for transforming research results into applications. Academic researchers propose fundamental ideas, and industry or other entities turn these ideas into intellectual property, patents, and commercial products.

This chain works, but the speed is relatively slow. In some countries, academia has no motivation to consider marketization issues, while industry rarely invests in truly long-term, fundamental research, focusing more on short-term returns.

We can rethink how the path from basic science to applied research, and then to real-world products, should be designed in the 21st century to be more efficient and better aligned with social needs.

Chuck: This is a very special historical node. In the past, universities and research institutions could rely on relatively stable government funding; now, especially in the US, this support has changed, making new collaboration models very necessary.

We see this as an opportunity. All parties are exploring new resource integration models. Organizations like SAIR have emerged to support outstanding researchers and cooperate closely with industry partners.

QbitAI: The quality of AI models largely depends on data. Do you think the amount of data in different basic science fields will lead to differences in the difficulty of AI implementation?

Terence Tao: AI tends to progress fastest in scientific fields where high-quality data is relatively abundant.

A typical example is protein folding. This field has seen decades of continuous investment, accumulating a large amount of carefully curated, high-quality protein data.

But in other fields, the situation is completely different. For instance, modeling a single cell might seem like a similar problem at first glance, but we currently do not have data of equal quality or scale.

AI's reliance on data is very high, far exceeding many traditional scientific methods; this is a real bottleneck.

Some hope to use synthetic data to replace real data, but if the generation method is not rigorous enough or the standards are not high enough, it may be counterproductive. Low-quality synthetic data will pollute the original dataset.

Chuck: I completely agree, and I also feel that the difficulty differences between different disciplines are very large.

If we want to solve harder problems in these fields, powerful foundation models are certainly important. But as Terry said, without high-quality data, even the most complex models will struggle.

There is an old saying, "garbage in, garbage out", which is very evident here. This is also why projects like AI x Science are so important.

At the event on February 10th, we gathered some top scholars from different disciplines and institutions. Participants included researchers from UCLA, Berkeley, Caltech, and other universities across North America.

Among our keynote speakers was also a recent Turing Award winner and one of the founders of the field of reinforcement learning, Richard Sutton.

We are also promoting exchange among researchers across regions. Promoting AI x Science cannot be done without global participation.

QbitAI: How specifically will SAIR support interdisciplinary and cross-national collaboration among scientists?

Chuck: IPAM (Institute for Pure and Applied Mathematics) and UCLA have actually been doing a lot in this area for many years, and they do it very well. IPAM has a long tradition of organizing interdisciplinary thematic projects and workshops, often spanning long periods and involving a wide range of fields.

I recently went to Singapore and Malaysia to attend a winter academic camp organized by OpenMind, which was founded by Richard Sutton, the Turing Award winner mentioned earlier. The organization mainly targets young researchers from around the world.

Many participants in that event were from Asia, including Singapore, Malaysia, China, the Philippines, South Korea, etc. Everyone gathered together to exchange ideas, discuss more efficient models, and think together about where AI should go next.

This framework of cross-regional and interdisciplinary collaboration is highly consistent with the direction SAIR hopes to support.

Terence Tao: SAIR is just getting started this year, but IPAM has been operating for over twenty years.

One of IPAM's most representative activities is the long-term thematic project, which lasts about three months. In these projects, we invite students, teachers, and sometimes industry researchers from different fields to communicate deeply around a theme, such as heart-related science or autonomous driving.

In fact, we held relevant seminars before the large-scale rise of deep learning.

Although IPAM is not an institution dedicated solely to AI, it has hosted many influential events in AI-related directions. The core philosophy is to bring together groups that rarely have the opportunity to communicate, such as pure mathematicians, applied mathematicians, physicists, engineers, and other scientists.

In the past, this kind of collaboration was more concentrated within academia. Through SAIR, we hope to push this model a step further, strengthening connections with industry while paying more attention to applications that can produce actual impact in the short to medium term.

My own background is still mainly in mathematics, but now we are also understanding mathematics in a broader sense, placing it in an ecosystem connecting theory, computation, and real-world impact.

We hope to try new forms and explore more possibilities for bringing different communities together.

Chuck: IPAM and UCLA have spent decades laying a solid foundation for collaboration, and through SAIR, we can further expand this model geographically and between academia and industry.

The core goal of SAIR is to build on the already effective foundation and expand it into a truly global, interdisciplinary collaboration network that is closely connected to real scientific problems.

QbitAI: Will the development of AI for Science, in turn, affect the way the industry develops and uses AI today? Could it become a superior path towards AGI?

Terence Tao: I think this is a very promising direction.

In mathematics and science, especially in mathematics, many outputs can be formally verified, which gives us a way to constrain AI. Placing AI in a verifiable environment helps reduce hallucinations.

If we can establish reliable, verifiable AI frameworks in mathematics or science, these principles are likely to be extended to other fields.

Currently, in fields like medicine and finance, fully trusting AI still carries high risks. You can use it as an auxiliary tool, but when it comes to life safety or huge amounts of capital, it is hard to feel comfortable handing over control to AI.

If we can first solve the problems of reliability and verification in the scientific field, then these results will have the opportunity to migrate to broader applications.

Chuck: Taking finance as an example, most people are not willing to leave sensitive financial decisions entirely to AI; the same applies to medicine, where errors can directly relate to life and death.

Precisely because of this, AI for Science, and conversely Science for AI, becomes particularly important.

If we can create truly trustworthy AI systems in a scientific environment, we hope these advances can migrate to those critical application scenarios in the near future.

Terence Tao: Exactly. Mathematics and science provide a very safe testing ground for AI.

If AI makes a mistake in a medical or financial scenario, the consequences can be very serious; but if it miscalculates a math problem, at worst you try again, with almost no loss.

This makes mathematics an ideal environment for polishing reliable AI systems.

Chuck: Another advantage is that mathematical research usually does not consume as much computing power as other applications. This allows us to experiment repeatedly more efficiently and explore new ideas at a relatively lower cost.

Terence Tao: This is also why we start from here. Drug development is certainly very important, but clinical trials are extremely costly and have long cycles; investing billions of dollars just to verify an AI method is difficult to achieve.

In contrast, developing and testing AI in mathematics allows us to verify ideas faster and more safely before gradually moving towards those high-risk, high-investment fields.

QbitAI: There is a concern that higher-level research capabilities, such as "taste", should be built upon solid basic training, but AI might be dismantling these growth ladders for young researchers. What is your view on this?

Terence Tao: Now, AI can already complete many tasks that used to belong to the training content of graduate students or junior researchers, such as solving standard problems, doing parts of experiments, or organizing literature.

These things are becoming easier to automate, creating a temptation: since AI is faster, why not let AI do it all?

But the problem is that these seemingly repetitive and even somewhat boring training sessions are very important for personal growth. A large part of my own ability, and that of many senior researchers, comes from this kind of basic work.

So there must be a balance. Even if AI can do it, we must consciously reserve valuable training processes for young researchers. Only after a person has accumulated enough experience, such as personally conducting a certain number of experiments, should automation be gradually introduced.

Chuck: We can already see some trends of over-reliance. Everyone knows AI is not one hundred percent trustworthy, but many people still immediately throw questions to AI to get answers or suggestions directly, which actually weakens the ability to think independently.

This is also why we particularly emphasize involving top scientists. Researchers like Terry, or other Nobel-level scientists, are people who received rigorous training in an era without AI tools.

By establishing a structure similar to an "apprenticeship", allowing experienced researchers to work closely with promising young people, we can gradually form better practices, better models, and also better platforms, supporting innovation without sacrificing learning itself.

Terence Tao: There is actually a very interesting historical analogy here. When calculators first appeared, many people worried whether students would stop learning basic arithmetic from then on.

This concern makes sense to some extent, which is why even today, we still teach children to do addition, subtraction, multiplication, and division by hand before letting them use calculators.

But on the other hand, calculators have greatly expanded the space for exploration. They make it easier for people to experiment with numbers, discover patterns, and explore ideas that were originally difficult to reach.

Tools themselves do not automatically make people weaker; they can also stimulate exploration and creativity. The key lies in how they are used.

Facing AI, we also need to make similar judgments: when to use it, when to restrain, and how to introduce it into the training system without weakening those truly important core abilities.

QbitAI: As AI gradually replaces many existing research processes and skills, what will be the most important abilities and traits for future researchers?

Terence Tao: Future research will increasingly be carried out in the form of larger-scale and more diversified teams. The team may include academic researchers, industry researchers, mathematicians, scientists, as well as AI systems and people from different backgrounds collaborating together.

In this case, how to cooperate efficiently in large teams will become a very important ability.

In the past, people often depicted science as the career of a "lone genius," but in reality, it has long been teamwork, and this trend will only continue to strengthen. Communication skills, and what we often call soft skills, will become increasingly important.

In this context, "taste" is crucial. The ability to form an overall judgment, identify which directions are worth investing in, and then use AI tools or other collaborators to expand on the ideas is very important.

We are likely to see more detailed division of labor than in the past.

Traditionally, especially in the field of mathematics, the way of working has hardly changed for hundreds of years, sometimes even carrying a bit of a "medieval style." One person has to be responsible for checking details, doing calculations, developing ideas, writing papers, applying for funding, and then giving reports.

But in the future, large projects will be completed by many people together. Some will be responsible for long-term vision, some will be good at deep collaboration with AI tools, some will be responsible for team coordination, and some will be responsible for telling the story to more people.

The types of abilities that can contribute to mathematics and science will become much richer.

Chuck: I often half-jokingly call myself a "person aspiring to be a scientist", largely because I have had the opportunity to collaborate long-term with outstanding scientists and mathematicians like Terry.

My background is not in research but more in the business field. But I have heard some very successful people who have built billion-dollar companies say that the best prompt engineers they have seen do not have backgrounds in engineering or computer science, but in accounting and law.

This precisely illustrates that future research will become more open. With the help of AI and new tools, people from completely different backgrounds will have the opportunity to participate in research in meaningful ways. This is also one of the important connotations of what we call "democratization of science and AI."

Terence Tao: Many of the projects I am currently involved in, and the directions SAIR hopes to support, are already highly collaborative. They often bring together professional mathematicians, students, researchers from other disciplines, and sometimes even include public participants.

With the development of AI and related tools, the threshold for engaging in serious scientific and mathematical research is lowering. This is one of the most exciting points of this technological transformation.

QbitAI: Does that mean in the future, even my younger brother who is still in middle school could publish a paper in "Nature"?

Terence Tao: This is possible (laughs).

In the future, there might be papers with thousands of authors, each contributing a small piece, but all making real and valuable contributions. In this sense, it is not impossible for very young people to participate.

In fact, such examples have already appeared in the field of mathematics: teenagers, with the help of AI, have found new solutions to known problems. It may not be the most significant breakthrough, but it is indeed a new result.

How common this situation will become in the future is hard to say for now; the only way is to continue trying and constantly exploring different research methods.

Chuck: I actually very much hope to see such a scenario. In the past, if you did not have a strong STEM or engineering background, it was almost impossible to directly participate in frontier research.

If people from non-traditional backgrounds can also participate in research in a meaningful way and truly have a positive impact on the world, that would be a very remarkable thing.

QbitAI: Combining our previous discussion, can you give some more specific examples of how SAIR will support the growth of young researchers in the future?

Chuck: This is a particularly good question; honestly, this is something I am personally very invested in. I have been doing work related to mentorship for many years, starting from my university days.

From my observations, the most important point in cultivating young researchers is setting examples.

People look for role models at different stages. When young, they look at their parents at home; when entering school, they look at teachers; later on, they turn their eyes to the broader society.

This is also why we attach such importance to bringing together outstanding scientists from different fields. Every founding member has a very different path to success.

The research experience of Barry Barish is very representative. Einstein predicted the existence of gravitational waves in his early papers, but it took nearly a hundred years from the proposal of the theory to its actual experimental observation. Gravitational waves were first detected in 2015, with the discovery announced in 2016; Barry Barish shared the 2017 Nobel Prize in Physics for this work and is currently a founding member of the SAIR Advisory Committee. This example well illustrates what it means to persist for decades.

The value of these outstanding scientists lies not only in their achievements but also in their ability to share how they persisted through uncertainty, setbacks, and failures. This is a very important part of mentorship.

Young scientists do not lack talent; they are just getting started. This is why I attach such importance to doing this with Terry and the entire founding team, because what is needed most now is support for these young people.

Terry is unique, but with the help of AI and better training channels, can the future have not just one Terence Tao, but 10,000 Terence Taos? Isn't that a very exciting thing?!

Terence Tao: Yes. SAIR is just one of many attempts; it cannot cover everything. The needs for supporting the next generation of researchers are very broad; no single organization can independently complete the task of cultivating the entire research team. What SAIR can do is focus on a few targeted projects.

Taking IPAM as an example, we can support summer schools, seminars, and popular science and exchange activities for the public. Some collaborative and crowdsourced research projects will naturally attract young researchers to participate; in some cases, they can even take on leadership roles.

We hope that SAIR can inspire other organizations, encouraging more institutions to step forward and take on important responsibilities in supporting the next generation of scientific research talent.

How Terence Tao Uses AI

QbitAI: Next, I would like to turn the topic to mathematics. Terry, what are you working on recently? Are there any directions you are particularly interested in?

Terence Tao: Currently, I still spend about half of my time on relatively traditional pure mathematics research, which is the kind of work I have been doing for the past 23 years. For example, studying patterns in numbers, understanding the essential differences between highly structured, periodic functions and very random functions, and studying some partial differential equations, such as those from fluid dynamics.

However, in these directions, I am increasingly leaving some of the most cutting-edge and technical advancements to younger collaborators. Correspondingly, an increasingly important part of my own research is beginning to connect with new technologies, especially new ways of "how to do mathematics" and "how to collaborate on mathematics."

One direction I am currently very interested in is formalization, which means no longer relying solely on pen-and-paper proofs but writing mathematics in a formal language that computers can understand and automatically verify. This will profoundly change the way of collaboration. It not only allows us to work with AI systems but also enables us to collaborate with many researchers we do not know.

In the past, if a stranger sent you a proof, you would likely be suspicious of whether it was correct; but if this mathematical content is written in a language that can be formally verified, this concern basically disappears.

With these methods, we have been able to achieve collaboration involving dozens of people in some projects, sometimes even more than fifty, and many of them have never met each other. Everyone can work together to solve big problems that are almost impossible to complete alone.
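To make the idea of formalization concrete, here is a toy sketch in Lean 4 (one proof assistant used for this kind of work; the specific example is my own illustration, not from the interview). The point is that a collaborator, human or AI, who submits such a proof needs no trust: the checker either accepts it or rejects it.

```lean
-- A statement a mathematician might write informally as
-- "addition of natural numbers is commutative", expressed so that
-- Lean can verify every step mechanically.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```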

We are also trying to use AI as a proof assistant, while borrowing many concepts from modern software engineering, such as using GitHub for version control, performing unit tests, conducting quality checks, and so on.

In a sense, I am learning software engineering tools and introducing them into a practice that could be called "mathematical engineering."
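To illustrate the "unit tests for mathematics" idea, here is a hypothetical sketch (my own illustration; the interview names no specific project): checking a claimed identity on many small cases before investing in a full proof, much as a software project runs tests before merging a change.

```python
# Hypothetical sketch: treating a mathematical claim like code under test.
# Claimed identity: 1 + 3 + 5 + ... + (2n - 1) = n^2.
# Small-case checks catch errors in the statement itself cheaply, before
# any formal proof effort: the mathematical analogue of a unit test in CI.

def sum_of_first_n_odds(n: int) -> int:
    """Sum of the first n odd numbers."""
    return sum(2 * k - 1 for k in range(1, n + 1))

def test_odd_sum_identity() -> None:
    # Verify the identity exactly for all n below 200.
    for n in range(200):
        assert sum_of_first_n_odds(n) == n * n

test_odd_sum_identity()
```

A failing case here would signal a mistake in the statement long before anyone attempts a proof, which is exactly the role quality checks play in a software pipeline.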

For me, all of this is more like a series of experiments. Not every attempt will succeed, but even figuring out what doesn't work is valuable in itself.

Quantum位: It feels like you have always been very willing to introduce new technologies into your research process. It has been half a year since your last interview with Lex Fridman. Has your view on AI changed during this time?

Terence Tao: Overall, the change is not big. But one thing surprised me: the progress of AI in the direction of mathematics is a bit faster than I originally expected, although, of course, there is still a long way to go before it truly matures.

What has changed more obviously is actually the attitude of the entire academic community. I am starting to see more and more colleagues accept the fact that AI will not disappear; it will exist for a long time. Everyone's openness to trying different usage methods has also significantly increased.

However, there is still a lack of a widely understood and recognized intermediate state. Many times, it feels like there are only two extreme choices: either use AI for almost everything, or don't use it at all.

The truly ideal situation would be a hybrid workflow: most research is still done in traditional ways, but certain steps are consciously and controllably handed over to AI. So far, we have not really found that most suitable balance point.

I often use the internet as an analogy. The internet is very useful, but we don't use it for everything. We still choose to meet friends offline rather than always having video conferences; but in certain scenarios, like this conversation right now, the internet is just right.

Over the years, we have gradually learned when and how to use the internet well. I think, for AI, we are still in the process of exploring this balance.

Quantum位: Terry, you are one of the world's top mathematicians and have long collaborated with many first-class mathematicians. How specifically do you use AI in your daily research?

Terence Tao: Actually, it's quite routine. I mainly use AI for auxiliary tasks. For example, literature retrieval: if I can't recall the exact form of a mathematical result, or its relationship to another result, I will just ask the AI. Also, if I need to quickly draw a graph or make a simple visualization, I let AI help.

I use it even more for text-related work. I almost always have auto-complete on when writing. Sometimes I will first divide the structure of a paper into five steps, write the first two steps myself, and then let AI draft the remaining steps for me.

So much so that now, when I am on a plane without access to AI, I occasionally catch myself thinking, "Why hasn't it finished this sentence for me yet?" before realizing the AI is not there.

If someone sends me a very long argument or a paper, I also often let AI summarize it for me first. In these aspects, it is indeed a very useful tool.

But when it comes to deep thinking, for example, when I am trying to solve a very difficult research problem, I basically do not use AI. At these times, I still rely more on paper and pen.

I have also tried to tackle research-level problems directly with AI, but the experience so far has not been ideal. The suggestions it gives are often quite clichéd, and sometimes it even interrupts my train of thought. However, in the auxiliary tasks surrounding research, AI has become very valuable.

Quantum位: Have there been any new Aha-Moments in the past year?

Terence Tao: Yes, usually it's after thinking about a problem for several months, and suddenly one day realizing: "Oh, it was so simple, how did I not think of it before?"

Before this, you have often tried many paths, sometimes eight or nine methods, all failing. But it is precisely these failed attempts that eliminate the impossible directions step by step, leaving only one truly feasible path. When you finally see that path, looking back, it seems obvious.

This kind of moment is often accompanied by an illusion, as if all those previous attempts were a waste of time. But in fact, it is constant trial and error, constant elimination, that truly lets you understand which directions work. My own mathematical "epiphanies" are still generated this way.

AI currently cannot reproduce this process. It can indeed propose many ideas, but these ideas often appear quite random, and it doesn't seem able to learn gradually from failures and adjust directions like humans do. So far, I haven't been able to truly use AI to directly solve research-level difficult problems.

However, once I already have a clear train of thought or solution, AI becomes very useful. It can help me write up the results, connect them with the existing literature, generate code, or provide computational support at certain mathematical steps.

In this sense, it is very valuable, but more as a complementary tool. It supports my work rather than replacing the part I care about most.

I think overall, people tend to use AI for tasks they don't enjoy much, leaving the parts they truly like for themselves. For me, the math problem itself is what I enjoy most, and this is the core motivation for my doing mathematics, so I will still do this part myself.

But there are some things I am very willing to hand over to AI. For example, searching in literature to see if anyone has used similar methods before, or filtering out relevant work from hundreds or thousands of papers; for me, this is a very ideal AI usage scenario. Also, some long and tedious calculations are very suitable for AI.
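As a small, hypothetical example of the kind of long-but-mechanical calculation that is easy to delegate (my own illustration; the interview names no specific computation), consider verifying the hockey-stick binomial identity exhaustively over many cases rather than expanding binomials by hand:

```python
from math import comb

# Hockey-stick identity: sum over i from r to n of C(i, r) equals C(n+1, r+1).
# Checking it exactly for every (n, r) with n < 50 is tedious by hand
# but trivial to mechanize.

def hockey_stick_holds(n: int, r: int) -> bool:
    lhs = sum(comb(i, r) for i in range(r, n + 1))
    rhs = comb(n + 1, r + 1)
    return lhs == rhs

assert all(hockey_stick_holds(n, r) for n in range(50) for r in range(n + 1))
```

The human decides which identity is worth checking; the machine grinds through the cases.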

Of course, this varies from person to person. Different researchers enjoy different parts of the process, and AI itself is a very broad tool. A feature that is particularly useful for one person may not matter much to another. So there is no such thing as the "best model" or the "standard workflow."

The key still depends on what you hope AI will help you do, and which things you prefer to leave for yourself to complete.

Quantum位: Mathematics has long drawn nourishment from other disciplines. This time, what has mathematics learned from AI, or what might it learn in the future?

Terence Tao: One thing I am currently learning extensively is software development. Future mathematics may become more and more like today's software development.

If we go back fifty or sixty years, software was often built by one person working alone: the same person wrote the code, tested it, and debugged it, doing everything themselves.

But today, software development has become a whole mature industry. Some specialize in writing code, some do UI, some are responsible for quality control, and there is a whole set of mature workflows, tools, and best practices, along with a wealth of experience from pitfalls.

Mathematics is beginning to learn from this model, including its successes and its failures.

Traditionally, mathematics and physics have been very closely linked. But now, we are interacting more and more with life sciences, social sciences, and other fields. The problems in these fields are often more complex and messy; the equations are not as clean as in physics, and the reliance on data is much stronger.

From this perspective, AI may be very suitable for handling such complex, noisy problems that are not easily formalized.

I feel we are entering a more interdisciplinary era. Mathematics is no longer in dialogue only with physics; almost all disciplines are communicating with each other, and AI is one of the important forces driving this interaction.

Quantum位: So you are also "doing software" now, right?

Terence Tao: In a sense, yes. I am involved in quite a few projects now, but more and more often I find myself doing something closer to project management. The people who actually prove the theorems are often other collaborators, while I coordinate the overall work and piece the different parts together.

This is a quite interesting role. In some projects, I am not the main "problem solver", but rather responsible for organizing and promoting, allowing everyone to bring their abilities into full play. It turns out this way of working can also function quite well in research.

Quantum位: AI has not only lowered the threshold for doing mathematics but also seems to be lowering the threshold for many other fields you just mentioned, such as programming, physics, and medicine. Has AI prompted you to develop an interest in other disciplines?

Terence Tao: Very obviously. One thing that surprised even myself is that the people I collaborate with now have much more diverse backgrounds than before.

Ten years ago, I almost only collaborated with mathematicians, occasionally doing something with statisticians or electronic engineers, and that was it.

But now I am collaborating with people from many fields, especially people from industry, such as Chuck. There is really a feeling that everyone has started talking to each other and, in the process, learning from each other.

Researchers from other disciplines can often benefit from a more mathematical way of thinking; while mathematicians can also learn a lot from perspectives closer to the real world.

The reason this can be achieved is largely because we now have many tools, many of which are AI-driven, that help us understand each other's languages and working methods, making collaboration smoother and more efficient.

I think this is one of the truly exciting things at this current stage: barriers between disciplines are lowering, and we are beginning to learn to work together in ways that were difficult to achieve in the past.

Quantum位: How does it feel to work with people from very different backgrounds?

Terence Tao: I actually enjoy this state very much. Of course, I also want to clarify one point first: deep domain experts will always have an irreplaceable position. Those who achieve world-class status in very narrow sub-fields are still very much needed; this has not changed.

The change is that these experts can now collaborate more closely with another type of person. They may not specialize in a specific field, but they are good at connecting ideas from different disciplines and seeing the overall picture.

In recent years, I myself have learned many things that were completely outside my training, such as biology, economics, policy, and research funding mechanisms. Some of it was unexpected, and at times quite challenging.

But I also found that some core concepts in mathematics can be naturally migrated to other fields, especially that set of methods regarding verification, rigor, and clear thinking.

For me, this is a continuous learning process, and I really enjoy it. I feel that in the new research environment, those who are willing to maintain an open mind, be happy to communicate across disciplines, and are not afraid to learn new "languages" will find it easier to thrive.

Chuck: What Terry just said actually echoes what we discussed earlier about "technology democratization," especially AI. With current technology, including what SAIR is doing, we are bringing together first-class talents from very different fields.

When you have such a network, things become much easier. It is not only easier to discover truly challenging problems but also easier to quickly judge who is the most suitable person to solve these problems. Sometimes, these people already have part of the answer in hand; sometimes, they can immediately refer you to more suitable collaborators.

In my opinion, this ability to efficiently connect problems and talent is a very concrete embodiment of "AI democratization" in reality.

Terence Tao: Traditionally, research has been organized around disciplines, such as the Department of Mathematics, Department of Physics, Department of Economics, etc. This structure naturally makes mathematicians mainly communicate with mathematicians, and physicists with physicists; true interdisciplinary collaboration is not common.

One point I have high expectations for SAIR is that it deliberately gathered a group of people with very diverse backgrounds and interests from the start. This design itself makes it easier to facilitate some connections that are not easy to appear in the traditional system.

By lowering barriers at the institutional and disciplinary levels, we have the opportunity to promote collaborations that were originally difficult to happen.

In the AI Era, Universities Need New Training Methods

Quantum位: Terry, you just mentioned traditional higher education, which happens to be the topic I want to discuss next. Chuck, in the AI era, which abilities do you think are more important for college students?

Chuck: I have mentored quite a few students over the years. My feeling is that even for PhD students, the number of mentors they work with closely over the long term is very limited: usually three to five, ten at most. This limits the range of perspectives they are exposed to.

But the situation is changing now, especially in directions like AI x Science. We can more easily bring together different types of professional capabilities. Such problems are naturally interdisciplinary, and AI makes large-scale collaboration from different backgrounds feasible.

In this environment, one ability will become particularly important, and that is critical thinking. Many people like to talk about "prompt engineering," but in my opinion, prompt engineering is essentially another form of critical thinking.

You must be clear about what problem you want to solve, how to articulate the problem clearly, and what kind of answer you really want. If you can't figure these out, AI actually can't help you much.

So, clear thinking, asking good questions, and grasping the core of the problem remain very critical.

At the same time, AI is also lowering the threshold for participation for people without traditional STEM backgrounds. I am an example myself; I did not receive systematic research training, and my background is more in the business field.

But with the help of AI, I can still meaningfully participate in scientific discussions, understand some core ideas, and make contributions. This experience is very powerful.

The future is not just about the distinction between STEM and non-STEM, but about letting people with different skill structures participate in different ways. This is exactly why AI x Science and SAIR are so important.

Terence Tao: In fact, the number of people interested in science and hoping to participate is far greater than the number of people who have received formal research training. And AI happens to expand the range of people who can participate in research.

The future development of science does not depend solely on technical ability, although technology is still important. Organizational ability, communication skills, and the ability to collaborate with others are becoming increasingly valuable.

But at the same time, whether one possesses a holistic vision, knows which problems are worth investing energy in, and understands when to use technology and when to restrain, these are also very important.

Quantum位: The existing university system has existed for a long time. Now with AI, many people feel a major transformation is coming. How do you think higher education should adapt?

Terence Tao: This is a very tricky problem. Honestly, I wish we had more time to slowly figure these things out. But the reality is, we can only figure it out as we go.

Some worrying phenomena can already be seen. Some students rely too much on AI; their grades look good, but they actually haven't learned much.

There are also some students who insist on learning in a completely traditional way, hardly using AI at all. They often have a more solid understanding, but in terms of efficiency and results, they may lag behind those classmates who use tools extensively.

So, it is obvious that we need to find a new balance point. Schools must teach students how to use AI responsibly and also let them know when not to use it.

I think the future will shift more towards group projects and collaborative learning, which itself is also closer to the real forms of research and industry.

In addition, courses may need to be integrated more closely. The current education system often splits knowledge into relatively isolated professional modules. In the future, perhaps a more holistic structure is needed, emphasizing general problem-solving abilities more.

In the past, students slowly learned how to learn, how to face failure, and how to withstand pressure through homework, exams, and struggling with difficult problems. So far, we have not found a structurally clear and systematic alternative plan for these abilities.

Universities are currently tied up with many practical concerns, such as maintaining daily operations, securing graduate funding, and balancing budgets, which makes it difficult to truly stop and redesign the education system from scratch.

Historically, this is not the first time we have faced such an impact. When computers became popular, education changed once; after the internet appeared, it changed again; when Wikipedia first appeared, there was also a period when students directly copied and pasted content for their homework.

Later, everyone found that the solution was not to completely ban new technologies, but to teach students how to use them correctly, taking them as a starting point, not an endpoint.

I think AI is a similar case. It can become a starting point for exploration and research, but it cannot replace thinking itself. Students cannot just ask AI for an answer and paste it into their homework.

The real challenge for higher education lies in how to find that balance point: on one hand, fully leverage the advantages of AI, and on the other hand, do not sacrifice deep learning and true intellectual growth.

Chuck: Often, changes in industry come faster than in academia. In the AI era, this gap is becoming increasingly clear. And this is exactly where the SAIR Foundation sees its opportunity: to bring academia and industry together, allowing both sides to learn from each other.

From my experience dealing with entrepreneurs, they have a very common trait: strong problem-orientation. No matter how difficult the problem is, they will focus on "how to solve it" and are willing to pay any price for it.

This mindset is what I hope higher education can absorb more of, especially in the background where AI has become a core tool. The university training model should adjust accordingly, allowing students to learn how to use AI to solve real-world problems, not just master fragmented pieces of knowledge.

There is also a big unavoidable issue: cost. In many developed countries, especially the US, higher education is extremely expensive. For some top universities, tuition and related fees for one year approach $100,000, totaling $400,000 over four years.

If this trend continues, especially after AI has provided new paths for acquiring knowledge and skills, people will naturally begin to question: Is a university degree still worth it?

This is also why aligning education more closely with industry has become so important. We need to know more clearly what abilities society really needs and how universities should adjust their training methods to respond to these needs.

In the upcoming series of projects, we will invite leaders from both the industry and higher education to gather together. We are organizing some roundtable discussions, inviting representatives from universities such as the University of Pennsylvania, USC, and UCLA to openly discuss how curriculum systems and training models should evolve to cope with these challenges.

In addition, I am very impressed by organizations like OpenMind. We are also thinking about whether we can conduct similar experiments. Through SAIR, in conjunction with IPAM and UCLA, we are exploring the holding of more intensive projects, such as summer schools.

The benefit of this form is that it allows us to iterate course content faster without being constrained by the traditional semester system, while also being more aligned with the rhythm of AI development itself.

Quantum位: Last question. Terry, if AGI is truly realized in the future and its mathematical ability comprehensively surpasses humans, is it still necessary for us to learn mathematics?

Terence Tao: AGI itself is actually a very vague concept; different people have very different understandings of it.

Let me give an example: transportation. In the past, people traveled on foot or on horseback; later, cars and airplanes appeared, far more efficient than walking. But we did not stop walking because of this, not because we had to keep walking, but because we like it, or because it is good for our health.

Science and mathematics may be similar in the future. Even if one day, with the help of AGI, the speed of scientific discovery is far faster than what humans can complete alone, people will still want to do science and mathematics themselves.

It may become more of a craft, a hobby, or an intellectual activity driven by interest, curiosity, and self-satisfaction.

At the same time, I also believe that no matter how powerful AI becomes, humans will continue to create value in ways different from machines.

The way humans learn and reason is very different from AI. AI can draw conclusions through massive data and computation; while humans can sometimes make quite good judgments with extremely little data and very low computation. This ability is likely to remain important in the future.

The scale and method of research may undergo huge changes. Today, a researcher usually solves only one problem at a time; in the future, perhaps thousands or even millions of problems can be advanced simultaneously. Humans grasp a few key directions, and AI fills in the rest.

We are not there yet, but this is a reasonable direction for evolution. Even in that future, learning mathematics still makes sense, although its role and purpose may be very different from today.

