In the AI era, university education is undergoing a profound shift in responsibility. The agency of learning, the shaping of cognitive processes, and the final judgment of outcomes are irrevocably shifting from professors and institutions to individual students.
When the cost of acquiring knowledge approaches zero, why should students go to university? And what should universities teach?
This is not merely about cheating or learning efficiency; it's a debate about "cognitive outsourcing" versus "cognitive enhancement," a process hidden within every student's AI query and every school assignment. Ultimately, students must take responsibility for their cognitive choices.
Recently, Anthropic organized a discussion, inviting four students from top global universities to personally describe how AI has permeated every corner of campus.
90% Penetration Rate and a Widespread "Gray Area of Chaos"
First, we need to establish a basic fact: AI on university campuses is no longer a question of "whether to use it" but "how to use it."
In a small survey, LSE student Zayn found that up to 90% of students use AI in various forms within their daily study workflows. From summarizing lectures and solving problem sets to providing feedback on essays, AI has become a foundational learning facility, akin to search engines and Office software.
However, this high penetration rate has brought not order but, in the students' words, "a lot of chaos."
Professors' attitudes are not uniform: some courses explicitly forbid it, while others actively encourage it. This leaves students in a vast "gray area." They know the tool is powerful but are unclear about its boundaries.
More interesting is the "identity polarization" effect. Students in the humanities and social sciences generally hold more cautious or even resistant attitudes towards AI, emphasizing "close reading" and original critical thinking. In computer science and engineering, using AI-assisted coding on class assignments remains taboo, because professors want students to master the foundational "craft." Outside the classroom, however, using AI programming assistants to build practical projects is already industry standard.
This chaos and fragmentation are typical characteristics of the early stages of technological transformation. The old rule system is crumbling, while a new paradigm has yet to be established.
AI is Leveling the Technical Barriers to "Creation"
The most disruptive impact of AI may not be helping students write essays, but rather dramatically lowering the technical barriers to "creating" and "building."
Zayn mentioned that, despite having no computer science background, he can now work comfortably in the command-line terminal, something unimaginable in the past. Many student clubs around him that previously had only a basic Instagram page can now quickly build feature-rich, polished official websites using Claude.
These builders are not traditional programmers.
This is perhaps the truly noteworthy point. When a humanities or business student can take an idea from concept to a deployable website or application prototype within days through natural language interaction, it means the power of innovation is being democratized to an unprecedented degree.
The applications students build are also highly imaginative and practical:
- Personalized learning tools: Upload lecture slides, and AI automatically generates detailed explanations, definitions, and background knowledge akin to professor annotations next to each page, preemptively addressing student questions.
- Campus life efficiency tools: A student developed an app called "Coursicle" specifically to monitor hot courses with scarce seats. Once someone drops the course and a spot opens, the system immediately notifies you, allowing you to secure the seat first.
- Resource integration tools: Another project tackles the seating problem from a different angle: it scans real-time data on all available classrooms across the university and tells students where to go for self-study when the library is full.
The commonality of these cases is that they are led by students without technical backgrounds, created to solve real pain points for themselves and their peers.
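As a rough sketch of how a seat-monitoring tool like the one described above might work under the hood: poll the registrar's seat counts on an interval and fire a notification whenever a previously full course opens up. The function names, data shapes, and polling scheme here are illustrative assumptions, not Coursicle's actual implementation.

```python
import time
from typing import Callable, Dict, List, Optional

def check_for_openings(previous: Dict[str, int], current: Dict[str, int]) -> List[str]:
    """Return course codes that were full last poll but now have open seats."""
    return [code for code, seats in current.items()
            if seats > 0 and previous.get(code, 0) == 0]

def monitor(fetch_seats: Callable[[], Dict[str, int]],
            notify: Callable[[str], None],
            interval_s: float = 60.0,
            max_polls: Optional[int] = None) -> None:
    """Poll seat counts and notify whenever a watched course opens up."""
    previous = fetch_seats()
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(interval_s)
        current = fetch_seats()
        for code in check_for_openings(previous, current):
            notify(code)  # e.g. a push notification or email to the student
        previous = current
        polls += 1
```

In practice `fetch_seats` would scrape or query the university's enrollment system; the point is that the core logic is simple enough for a non-programmer to assemble with AI assistance.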
The role AI plays here is not as a simple information retriever but as a powerful "execution layer," directly translating student intent into functional entities. This is a new innovation model, a prototype for "citizen development."
AI is a Mirror Reflecting Learning Motivation
In the conversation, Zayn proposed an analytical framework. He believes students typically attend university with three core goals, each with different weights:
- Learning: Deepen understanding of their chosen field.
- Career: Prepare for future work and secure a good job.
- Social: Build networks and enjoy university life.
He pointed out: "How a student uses AI very genuinely reflects their motivation."
- If a student's main motivation is to save time to invest energy in social activities or internships, they tend to use AI to complete tasks directly, treating AI as an "outsourcing tool."
- If a student's main motivation is genuine learning and understanding, they actively use AI as a "cognitive enhancement tool." They use AI to reinforce learning, for example, having AI play a Socratic teacher or asking AI to explain the same complex concept in different ways until they fully grasp it.
From this perspective, the prevalent "anti-cheating" measures in universities, such as AI detection tools, are almost futile, because the nature of the game has already changed. In the past, schools forced students to learn through exams and assignments. Now, students can theoretically use AI to "get through university without truly learning."
Therefore, the responsibility now lies with the students.
The question of why one attends university has never been so sharp and real.
AI itself is neither good nor evil; it is merely an amplifier. You can choose to use AI to bypass the learning process and obtain seemingly good grades; you can also choose to use AI to assist yourself, achieving unprecedented depth of learning.
The choice, and the responsibility that follows, is entirely handed over to the students themselves.
From "Copy-Paste" to "Intentional Dialogue"
Students' AI usage behaviors are also rapidly evolving. A few years ago, the typical workflow with AI chatbots was "question, answer, copy, paste." Now, students are becoming noticeably more sophisticated and deliberate.
Marcus's workflow looks like this: he creates a dedicated project in Claude for each course and uploads the syllabus, all lecture materials, and the readings. Then, within that project, he carries on a series of "extended dialogues" around different topics.
In this mode, AI is no longer a mindless Q&A machine but more like a "personal teaching assistant" or "conversational learning partner" with complete contextual memory and understanding of the entire course.
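The "one project per course, full context, extended dialogue" pattern can also be sketched in code. Below is a minimal illustration of the two ingredients: packing course materials into a system prompt and keeping the whole conversation history across turns. The function names are hypothetical, and the commented-out API call (using the Anthropic Python SDK's Messages API) is an assumption about how one might wire it up, not Marcus's actual setup.

```python
from typing import Dict, List

def build_course_context(syllabus: str, lecture_notes: List[str]) -> str:
    """Pack course materials into one system prompt, mimicking the
    knowledge base a student uploads into a Claude project."""
    sections = ["You are a personal teaching assistant for this course.",
                "SYLLABUS:\n" + syllabus]
    for i, note in enumerate(lecture_notes, start=1):
        sections.append(f"LECTURE {i}:\n{note}")
    return "\n\n".join(sections)

def extend_dialogue(history: List[Dict[str, str]], user_message: str) -> List[Dict[str, str]]:
    """Append a user turn while keeping every prior turn, so the
    assistant retains context across the extended dialogue."""
    return history + [{"role": "user", "content": user_message}]

# A real call might then look like this (requires the `anthropic` package):
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="...",  # model name left unspecified on purpose
#     max_tokens=1024,
#     system=build_course_context(syllabus, notes),
#     messages=extend_dialogue(history, "Quiz me Socratically on lecture 2."),
# )
```

The design point is that context lives in the system prompt once, while the growing `messages` list carries the back-and-forth, which is what turns a one-shot Q&A machine into a conversational learning partner.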
This signifies a shift in the relationship between students and AI from a unidirectional mode of seeking answers to a bidirectional mode of collaborative exploration. Students are no longer passive information receivers but active dialogue initiators and guides. They have learned how to ask more precise questions (prompting), how to set AI roles, and how to dig deeper by asking follow-up questions.
This itself is a new, crucial meta-skill: the ability to collaborate efficiently with powerful artificial intelligence.
The Awkwardness of Universities and the Innovation of a Minority
In this transformation, the response of university administrators and most professors is clearly lagging. Students generally feel that schools are falling behind students in AI literacy and application.
Khloe mentioned that her university's machine learning course developed its own chatbot specifically for answering student questions about course content. This sounds advanced, but she sees it more as a "band-aid solution" because it cannot prevent students from using more powerful general AI tools outside the school to find answers directly.
This "wall-building" approach is destined to be ineffective in an open internet environment.
However, some forward-looking educational practices are emerging, such as:
- LSE's LSE100 course: This compulsory course for all first-year students has completely changed its teaching model. The course explicitly guides students on how to use Claude. It no longer requires students to submit essays but to submit "dialogue logs" with AI. The focus of grading is on whether students engaged in deep, back-and-forth quality dialogue with AI, whether they asked good questions. Finally, students must submit a video presenting their viewpoints. In this model, AI is no longer a cheating tool but becomes part of the learning process itself, assessing higher-order thinking and expression skills.
- Arizona State University's (ASU) support system: ASU takes a very proactive and embracing stance towards AI. The university's career management center has created a "prompt corpus" for students, providing high-quality prompts for various job-seeking scenarios. The school has also launched a new course titled "Artificial Intelligence Strategy and the Future of Work," which became permanent due to its popularity.
Although these cases are still in the minority, they point to the future direction of education: moving from blocking AI to guiding and leveraging AI. The core of education is no longer assessing whether students can "memorize" knowledge but assessing whether they can use AI as a powerful tool to "explore, integrate, and create" knowledge.
Job Market Anxiety and New "Entry Tickets"
AI's impact on the job market also brings students a dual sense of anxiety and opportunity.
On one hand, the job search process is becoming increasingly "dehumanized." Khloe complained that she spent almost her entire recruitment season "talking to screens." Many companies use AI to screen resumes, and initial interview rounds are conducted by AI systems: you answer preset questions facing a screen, never meeting a real person. The whole process feels like a lottery, full of uncertainty. Tino added that a carefully prepared resume might receive an AI-generated rejection letter within 15 minutes, which is demoralizing.
On the other hand, "AI fluency" is rapidly becoming a new entry ticket into top industries. Tino mentioned that top consulting firms, which once hired generalist MBAs, now explicitly seek MBAs with AI fluency. Whether you understand how to apply AI across different industries to solve business problems has become a core competency.
This means that whether students consciously and deeply use AI during university and can clearly articulate their experience in using AI to solve complex problems will directly impact their career prospects. In the past, programming skills were a plus for many non-technical roles; in the future, the ability to collaborate efficiently with AI will become a necessary skill for all knowledge workers.
Farewell to "Nanny-Style Education," Welcome the Era of "Learner Responsibility"
This discussion about AI on campus did not slide into technologically deterministic doom-saying but concluded on a thoughtful, positive note. The students' consensus: we'll figure it out.
This confidence stems from their adaptability as digital natives, and from the fact that, willingly or not, they are shouldering more of the responsibility in this transformation.
Looking back at the entire conversation, we can see a main thread: The core of university education is shifting from "granting knowledge" to "cultivating abilities," and the emergence of AI is accelerating this process with unprecedented force.
Universities can no longer position themselves as closed knowledge vaults because knowledge has become ubiquitous and easily accessible through AI. Professors can no longer merely act as transmitters of knowledge.
The core value of future universities will be reflected in three aspects:
- Guiding Motivation: Help students find and strengthen their true learning motivations, not just for grades and diplomas.
- Teaching Methods: Teach students how to collaborate effectively with AI, how to ask good questions, how to critically evaluate AI-generated content, and how to use AI as a tool for innovation and creation.
- Providing Environment: Create a physical and academic environment that encourages trial and error, encourages cross-disciplinary collaboration, and provides high-quality human interaction (precisely what AI cannot replace).
As Zayn said, universities need to "trust students." They will make mistakes; some will submit 100% AI-generated garbage. But they will also learn through trial and error to discern what is truly beneficial for them. This process of learning and maturation cannot be replaced by any rules or technological means.
A new era of "learner responsibility" has arrived. In this game, the biggest variable is not the capability of AI models but the intention, curiosity, and self-drive of the person in front of the screen. For those prepared to take responsibility for their own cognition, this is undoubtedly the best of times.