The Guardian is a UK newspaper with a hard-left viewpoint. Generally, they do "science" well. That is, their reporting on scientific breakthroughs, controversies, and the like is well-written, balanced, and clear. I read their work on global warming/climate change knowing that they will be thoroughly in the "climate change" camp.
On the other hand, some of their opinion pieces are hilarious examples of whingeing (for Americans, that's whining), special-interest pleading, misinformation peddling, inadequate research, and other specimens of lunacy. I have talked about two of them previously. This one is not really goofy, but it has its moments.
For a start, not every publication is equal. When the author Theodore Sturgeon first invoked his famous law ("90% of everything is crap") he was talking about science fiction, but he could equally well have been talking about science. Estimates vary wildly, but probably between a quarter and a third of all research papers in the natural sciences go uncited. A much larger proportion is cited only by their own authors or by one or two others.

I, and many of my fellow MS students, had to "publish" our master's research. Most of us also published at least the abstract of our master's research paper in another publication. Many science publications will publish the "proceedings" of a large-scale science meeting (we took over the Virginia Tech campus for a week one summer). The issue is filled with short summaries (or "abstracts") of all the papers presented, even if the "presentation" is a single poster board in a room full of them.
[And I hope, to God, mine is never cited.]
But, so what? Why is this a problem?
People have to start somewhere. And part of the purpose of publishing is to let people know what has been done.
How does anyone know beforehand which publication is a piece of "crap"? Or even for years afterwards? Gregor Mendel's foundational work on genetics was "lost" for over 20 years. Up until the moment someone recognized its worth, it was "a piece of crap." Nobel laureate Barbara McClintock was ignored for over 30 years before her work was recognized.
In general I am a supporter of open access, but subscription business models at least help to concentrate the minds of publishers on the poor souls trying to keep up with their journals.

This is a classic gatekeeper problem. The author supposes, or fantasizes, that an all-knowing gatekeeper will be able to weed out the useless and find the diamonds. There is a similar problem going on in science fiction right now. The SF gatekeepers are despairing that SF is a dying genre, while those publishing independently are finding readers and, for some, success. (As a side note, one of the largest boosters of SF gatekeeping publishes with the Guardian.)
As if this were not enough, proponents of open science (including me) are proposing that researchers should start publishing all of their work – complete with full data sets, comprehensive methods, negative results and "failed" experiments.

Agreed.
The only practical solution is to take a more differentiated approach to publishing the results of research. On one hand funders and employers should encourage scientists to issue smaller numbers of more significant research papers. This could be achieved by placing even greater emphasis on the impact of a researcher's very best work and less on their aggregate activity.

"Significant"; but again, who determines this?
...Less significant work should be issued in a form that is simple, standardised and easy for computers to index, retrieve, merge and analyse. Humans would interact with them only when looking for aggregated information on very specific topics.
What would such a publication look like? I don't know exactly, but we can see signs in born-digital data publishing platforms...

I worked for a "biological information service" in the 1970s and '80s. We read abstracts and indexed the articles. Our work was both published in bound form and digitized for computer retrieval, even in that early and benighted period.