What Remains of Learning in the Age of AI
Today I tried something simple but revealing: I used Claude to help me study for exams by extracting the key points from past papers and lecture materials. When I later went back and reread my professor’s slides, I realized that the summary was surprisingly good. It captured the important themes, recurring patterns, and likely exam focus very well. The only weakness was that image-heavy parts of the slides were extracted less effectively.
That experience made something very clear to me: the traditional exam-oriented learning system has been deeply shaken by the arrival of large language models.
In the past, a large part of studying already consisted of difficult but familiar work: finding what mattered, organizing it, summarizing it, and turning a pile of lecture notes and past exams into something usable. Now much of that labor can be outsourced to AI. This does not make learning meaningless, but it does force us to ask a harder question: what is learning once summarization, retrieval, and pattern extraction become cheap?
I do not think AI destroys learning. I think it amplifies the people who already know how to learn.
A student who is curious, careful, and willing to verify information becomes much stronger with AI. Such a person does not stop at a generated summary. They compare it with the original lecture, notice what is missing, question what is emphasized, and continue digging deeper. In that sense, human learning starts to resemble a more powerful form of retrieval-augmented generation. We also retrieve, connect, and assemble knowledge. But unlike a machine, a serious learner can slow down, doubt the answer, reorganize the structure, and care whether something is actually true rather than merely plausible.
That may be the most important distinction. AI can retrieve and compress, but humans can care. Humans can hesitate. Humans can notice the subtle mismatch between what is said and what is meant. That carefulness is not inefficiency. It may be one of the last real sources of value.
This leads to a more uncomfortable question: if AI can summarize, explain, compare, and even generate study plans, then what remains of our value?
For a long time, education rewarded memory, speed, obedience, and repetition. But these are exactly the areas in which machines are becoming dominant. If so, then the value of a person can no longer rest only on how much information they can store or how fast they can produce standard answers.
Perhaps what remains is not raw knowledge, but judgment: the ability to decide what matters, to see structure across different ideas, to pause instead of moving blindly, and to notice details that others miss because they are too busy following the obvious path. In an age obsessed with speed, stopping to think may become a competitive advantage.
This is why I increasingly feel that comparison-based learning is more powerful than endless repetition. Simply doing more problems is not always the best path. Repetition can strengthen recall, but it can also trap the learner inside a narrow groove. By contrast, comparing ideas, methods, mistakes, and perspectives often produces deeper understanding.
In a loose metaphor, traditional rote learning feels closer to an RNN: moving forward step by step, carrying a hidden state, forgetting things along the way. Comparison-based learning feels more like a Transformer: revisiting, relating, and reweighting information across longer distances. One path pushes forward until exhaustion. The other builds meaning through relationships.
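The metaphor can be made loosely concrete. Below is a minimal, purely illustrative NumPy sketch (the vectors stand in for "ideas", not real embeddings): the first loop folds inputs into a single evolving state, so earlier items fade; the second compares every item with every other and rebuilds each one from the whole set.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 4))  # five "ideas", each a 4-dim vector

# RNN-style: fold ideas in one by one; early inputs blur into a single state
state = np.zeros(4)
for t in tokens:
    state = np.tanh(0.5 * state + 0.5 * t)  # earlier inputs are progressively overwritten

# Attention-style: compare every idea with every other, then reweight
scores = tokens @ tokens.T  # pairwise similarity
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
context = weights @ tokens  # each idea is rebuilt as a weighted mix of all the others

print(state.shape)    # a single compressed state: (4,)
print(context.shape)  # every idea kept and re-related: (5, 4)
```

The contrast is the point: the sequential fold ends with one vector and no way back, while the attention pass keeps all five items and lets each one borrow from the rest.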
This is also why I have begun to question the way many people consume knowledge online. Many follow famous AI educators such as Andrej Karpathy and start to believe that one excellent knowledge base can explain everything. But even the best knowledge base has boundaries. Karpathy's material is brilliant, but it is still aimed primarily at one domain, AI. No single system of explanation is universal. Real understanding begins when we stop treating a source as a final authority and start using it as a starting point for our own thought.
The same question appears in research. Does research still have meaning in a time when AI can assist with reading, coding, writing, and even brainstorming? I think it does, but perhaps not in the way many institutions frame it. Research becomes hollow when it is done only for papers, metrics, or signaling. It becomes meaningful again when it begins with a real question, a small but stubborn problem, something that genuinely resists understanding. In that sense, success may come not from chasing fashionable topics, but from starting with concrete problems and staying with them long enough to see what others ignore.
This thought connects with engineering as well. Whether we talk about agents, models, or AI systems, again and again we return to the idea of a pipeline. Computer science is still about structure, flow, constraints, trade-offs, and design. Yet even here, AI threatens to automate more and more of the pipeline. So how do we prove our value? Not merely by building pipelines faster. Not merely by communicating more fluently. Perhaps our value lies in something quieter: the ability to investigate deeply, to tolerate uncertainty, to stop at the right moment, and to continue asking better questions than the system itself would ask.
At the same time, I have started to feel that memory and intelligence are not the whole story.
We often speak as if the mind were made of two parts: memory, which stores the past, and intelligence, which processes and transforms it. But there seems to be another layer beneath both of them, something harder to define. I can only call it feeling.
This feeling may be partly chemical, partly experiential, and partly something that escapes easy explanation. It does not replace intelligence, but it seems to govern it. It decides what draws us in, what we avoid, what we obsess over, what we find meaningful, and what we can no longer ignore. Memory tells us what we have seen. Intelligence tells us how things work. But feeling tells us what matters.
In that sense, feeling may exceed intelligence without being irrational. It may be the hidden force that gives direction to thought itself.
This is why two people can read the same material, possess similar technical ability, and yet become entirely different thinkers. Their difference is not only in what they know or how well they reason, but in what grips them: what wounds them, what excites them, what they are willing to stay with when the work becomes difficult. Beneath memory and intelligence lies valuation: a deeper layer that assigns weight to the world.
Without that layer, memory becomes storage and intelligence becomes machinery. With it, thought acquires direction, patience, obsession, taste, and even meaning.
Perhaps this is also why AI still feels incomplete. It can simulate memory, retrieval, and reasoning. It can imitate emotional language. But whether it truly possesses that inner pull — that felt sense of significance which organizes a life from within — is a deeper question. Human beings are not only creatures that think. We are creatures that are moved.
I have also started to feel that presentation plays a surprisingly important role in this whole process. Studying overseas, I notice how central presentations are. At first this can seem artificial, almost as if the system is forcing performance onto students. But perhaps there is a deeper reason. Presentation activates the entire system. It forces us not only to consume ideas, but to reorganize them, compress them, and make them alive for someone else. It is not just a communication exercise. It is a form of thinking. To present is to rebuild knowledge into a structure that can move.
So what remains, when knowledge itself is increasingly easy to access?
I think what remains is the learner’s inner architecture: judgment, attention, patience, structure, and the courage to think independently. And beneath even these, perhaps, remains something more elusive: the feeling that gives thought its weight and direction.
In the age of AI, value may no longer belong to the person who memorizes the most, nor even to the one who uses tools the fastest. It may belong to the person who can combine breadth and depth, who can move with both BFS and DFS (breadth-first and depth-first search), who can keep the whole picture in mind while still drilling into detail, and who knows when to continue and when to stop.
AI does not end learning. It exposes what learning really was all along.
Beneath memorization, beyond exams, after all the summaries are generated, what remains is still the same difficult thing: to see clearly, to think carefully, and to build a mind that is truly your own.