The distinction between reading a lot of books, and publishing articles or books of one's own, is crucial in modern academia. A PhD isn't just a bigger and more advanced version of the previous degrees; it doesn't just certify that you have learned some particular curriculum. The key thing that you need to do to earn a PhD is to add something new to the total store of human knowledge. You have to discover something.
It's not necessarily a huge achievement. The thing you discover can be very small—and it usually is. It might take much less time and effort and intelligence to discover your tiny new thing than it would to absorb a large volume of difficult ideas that were previously known; the tiny discovery could still get you a PhD, but no amount of regurgitating previous knowledge will.
In fact it's practically impossible to discover anything new without also mastering a lot of previous knowledge, so PhD students generally do learn piles of old stuff. They tend to be academically talented people with high motivation, and by this point they've got a lot of experience in learning, so they often absorb and organise information at a far higher rate than a typical undergraduate can. And since they've been doing that for a while, they probably do know quite a lot. But knowing stuff that other people already know still isn't the actual point. It's just a means to the end of finding out new things. And then when you've found something new, you publish it, so that other people can use it to go on to find more new things.
After managing that once for their final degree, academic professionals are expected to keep on finding new things, and publishing them. Professors aren't simply teachers, passing on established knowledge to new students. Depending on where we work, in fact, teaching may be only a relatively small part of our jobs. Averaged over the year, my wife and I reckon that only about 25% of our total work time goes to teaching. There's some administration to do, but the bulk of the workload is research.
It hasn't always been this way. The modern PhD degree, and the conception of universities as institutions for research, were only developed in the 19th century. The movement mainly got started in Germany, and then gradually spread. The Bachelor's and Master's degrees are medieval traditions that do not require any original discoveries, and the medieval conception of scholarship was just to master the existing stock of knowledge, not to add to it. The pre-modern thinkers who did find new things were exceptions. And it never used to be a requirement for professors to have PhDs.
In 1903 the famous Harvard psychologist William James (author of The Varieties of Religious Experience and brother of the novelist Henry James) wrote an essay decrying the new requirement for faculty PhDs as an octopus whose tentacles were strangling the academic world. C.S. Lewis was a long-time don and college fellow at Oxford, and ultimately a professor at Cambridge, but he never earned a PhD, and his lifetime production of academic scholarship, as opposed to apologetics and fiction, was modest by today's standards.
It's not completely clear why the world has settled on this 19th-century German idea that teaching and research should go together. In principle there could be some synergy, sure. Somebody who is personally working at the coalface of new knowledge may also be able to pass on to students a deeper understanding of previous knowledge than they would get from someone who was just passing on things that they had heard in their turn. On the other hand we don't expect primary and secondary school teachers to be doing original research: we expect them to be experts at the difficult job of teaching. Brilliant researchers are often terrible teachers. So maybe it doesn't really make sense to bundle the two things together the way we now do.
Maybe there are cynical, economic reasons for the research requirement. Higher education is an industry. If an institution is merely passing on previous knowledge, it's hard to say how it does that better than anyone else; but it can stand out through research, because by definition any success in research is something nobody else has done. So it may be that universities do research in the way that male peacocks grow big tail feathers: not in order to compete at something that matters in itself, but simply because they need some way to compete. On the other hand, some universities make so much money from research grants that you have to ask why they bother with teaching anyone.
Perhaps the best reason why original research and thorough understanding of previous knowledge are expected to go together today is just that previous knowledge is not very good. The wisdom of today only seems impressive from a distance, when you don't understand it. To understand and appreciate it properly is to realise how lousy it is. If you don't feel that you can improve it, at least a little bit, somewhere, then you must not really understand it.
Anyway, it is somehow the standard now. Up until around a hundred years ago you could totally count as a scholar just by reading a lot of old books, but nowadays the term no longer means only that. You have to be finding something new, in old books or in an excavation or in the lab or wherever, and publishing it in a book or journal that peers in the field have judged to contain worthwhile new stuff.
I was a teenager before it was cool.