Quality vs quantity
A wise man once said that 100 papers is a career. He died with a smile on his face, doing one of the things he liked best. Perhaps I should have listened. I have published 318 papers. You can publish lots if you have a large team of sound assistants and students, or a paper-generating mechanism: a model or a data set so rich that it keeps giving. For a while, I had both. There is a danger here: you will lose touch if, year after year, you outsource all data-cleaning and coding. Yet another variant on the same theme gets boring.
Preferences matter. I think programming is more fun than editing an assistant’s prose. So, partly by happenstance and partly by design, I am again doing most of my own research: I clean the data, I program the code, I write the paper. Much like that wise man. I do not play polo.
I still have rich, partly unexplored datasets that no one else has, and a model that can be reconfigured and reconfigured again. Some former assistants and students have become collaborators. New students arrive. New alliances are formed. So, I still publish lots. IDEAS/RePEc puts me ninth. Peter Nijkamp set the tone when I was young.
So, who am I to talk about the quantity vs quality trade-off? I recoil when I see that someone published 53 papers in 2025, or hear that someone else signed a contract to publish 100 papers per year. Abramo and d’Angelo (2025) put the threshold for hyperprolificity (in economics and finance) at one paper per month. I crossed that threshold more than once.
I understand how you can do a paper or two per week. Recycle parts of papers. Apply the same method to multiple datasets, or multiple methods to the same data. Co-authorship by broad guidance and superficial edits is the fastest way to increase your output. I am reminded of this guy who lost his job because one of his papers contained a major diplomatic faux pas, which can only mean he had not carefully read any of the draft versions. You should not write faster than you can read.
Students are not helped if you just put your name on their paper. You help them by discussing every stage of their research. You cannot help if you let your own research skills atrophy. This limits the number of students you can advise and so the number of co-authored papers.
Ditto for collaborations. You are a co-author if the paper would have been materially different without your input. Can you do that once or twice a week? No, you cannot. You can lend your name to a paper to boost its chance of publication — but that only works with journals you should not want to publish in.
If you worked with someone before, collaboration can be smooth and efficient. You know their weak spots and so which parts of the paper need to be checked twice. New co-authors take more time. Networks of proper collaboration grow slowly and organically. Networks of superficial collaboration can grow fast.
Current research into hyperactive authors focuses on output. I would add two indicators: solo-authored papers, and network formation.
What’s the way forward? Adrian Barnett recommends we all slow down. This is a bit silly. We’re in a rat race. If you slow down, you hurt your career and the careers of your students and post-docs. The academy is too large, too anonymous for Ostrom-like self-regulation. We’re in a bad equilibrium. We need an external shock to get us out.
I am no fan of the UK Research Excellence Framework (REF). It counts your best two papers of the last six years. This rewards slow science and promotes excellence. However, some of the people here are lazy rather than slow. More importantly, the REF punishes sound, applied research. We expect PhD candidates to write three papers in three years. Their professors can get away with one every three years?
Anyway, the UK is too small to change the world. A substantial part of the problem originates in younger, less well-established national academies, where inexperienced regulators cannot recognize quality and so opt to reward quantity instead. The Chinese authorities worry about involution, that is, excessive competition between manufacturers. Perhaps they should also be concerned about academic involution. They should first do away with cash bonuses for publications, and then follow the REF and gradually raise the bar: from one paper per week, to one per month, to one per year.


Excellent Read!
This piece cuts through a lot of academic BS around productivity metrics. The distinction between networks that grow organically versus those built on superficial ties is spot-on. I've seen collaboration networks where people basically rubber-stamp papers, and the quality drop is noticeable compared to teams that actually grind through every analysis together. Would be curious how citation patterns differ between these two types of networks.