We stand at an inflection point in human history. Artificial Intelligence (AI) has moved from the realm of speculation to omnipresence, significantly reshaping how we work and learn. Algorithms today compose music, design software, diagnose diseases, and mimic human reasoning. What began as a search for efficiency has become a new way of seeing and shaping the world.
As machines increasingly acquire the capacity to process, analyze, and even “think”, the critical question before educators is not how to outpace technology, but how to prepare humanity to live wisely with it. If readily available AI applications can effortlessly assimilate existing knowledge from an ocean of resources, what, then, is the purpose of institutional education?
The emergence of AI challenges every assumption about work and learning. For centuries, education focused on accumulating and transmitting information — on teaching students what to know. But in an era when information is abundant and often machine-generated, that approach has lost its primacy. The era of rote knowledge accumulation — of swallowing as much information as possible — is gradually coming to an end.
Therefore, with the advent of AI, the crisis we face is not a shortfall of information, but a deficiency of judgment. When a large language model (LLM) can produce an articulate analysis a few thousand words long in a matter of seconds, the human advantage lies not in replicating that skill, but in interrogating it — in asking whether the reasoning holds, whether the evidence is sound, and whether the conclusion is intuitively reasonable.
If LLMs can efficiently compile answers, the educator’s objective should be to teach students how to frame and ask questions effectively. This, together with the ability to discern signal from noise, is the essence of critical thinking.
Hence, judgment and conscious decision-making have become the new essential literacy of the AI era. Critical thinking involves the ability to frame questions intelligently, to identify meaning amid noise, and to recognize where human responsibility must intervene. Our graduates need to be equipped to conduct a forensic examination of information, to evaluate the provenance of sources, and to detect systemic bias both in algorithms and in the fundamental assumptions underpinning AI-generated conclusions.
As judgment and critical thinking become the defining skills of the AI future, a liberal arts education reclaims its central importance as the essential, future-proofing curriculum for the age of intelligent machines. A true liberal arts education is a pedagogical architecture built on interdisciplinarity, spanning the humanities, social sciences, and the physical sciences. It allows the engineering student to study Immanuel Kant, the history major to learn astronomy, and the computer scientist to engage with ethics.
Consider what Nalanda taught in the seventh and eighth centuries, or how Isaac Newton was trained at Cambridge in the 17th century: there were no hard barriers between subjects. Everybody had to learn certain basic subjects — astronomy, law, literature, religion, and mathematics. Specialized subject categories in university curricula were created much more recently, in the mid-19th century, when the purpose was to train people for certain specialized jobs.
In the AI age, we no longer need educational institutions that continue to teach in silos. Education must instead embrace an interdisciplinary approach, where ideas and insights flow freely across fields. Such an approach achieves two vital outcomes.
First, this kind of education trains the mind to connect disparate domains. As AI masters single domains (a specific codebase, subject area, or financial model), it is the human who remains uniquely capable of seeing the whole, of synthesizing inputs from technology and society, from people and markets, to form holistic and innovative strategies. This is the difference between data processing and wisdom.
Second, it cultivates the capacity for “first-principles thinking” — the ability to break down complex problems into fundamental concepts. When AI changes the tools, a liberal arts education ensures that humans understand the bedrock principles governing science, human nature, and political economy, so that they can adapt to any new technological paradigm.
By integrating these diverse ways of thinking, the liberal arts and interdisciplinary model nurtures learners who are technically proficient and intellectually balanced, with deep knowledge, critical understanding, and sound judgment. This model of education is common among global universities but is still at a nascent stage in India. As geopolitical dynamics shift, many Indian students are exploring opportunities for world-class education in India. They will increasingly seek Indian universities that offer this interdisciplinary model of learning, which is both globally relevant and AI-ready.
Across Indian universities, students are increasingly using AI tools for their coursework, research, and creative projects. This rapid adoption reflects both the promise and peril of AI in education. While it can significantly enhance productivity and learning, it also raises pressing questions about authenticity, bias, and intellectual integrity. As students learn to use AI, they must also learn to think about AI — to understand its limitations, its ethical boundaries, and the human values that must guide its use. Without this grounding, we risk nurturing technical competence without moral consciousness.
The liberal arts curriculum — with its grounding in philosophy, history, and scientific and quantitative thinking — is where ethical imagination is forged. It trains students to ask sharper, more meaningful questions rather than merely memorizing answers. For instance, instead of simply learning how an algorithm works, a liberal arts-trained student might ask: “Whose data is being used to train this algorithm, and what biases might it carry?” This kind of education teaches humility, the complexity of human motivation, and the truth that great power demands great responsibility. It does not just train future employees; it trains citizens and ethical leaders who will deploy AI responsibly.
Somak Raychaudhury is vice-chancellor and professor of physics, Ashoka University. The views expressed are personal
