New work with Brandon Craig (University of Calgary) and Chris Striemer (MacEwan University) highlights the effect of cerebellar injury on attentional tasks, primarily on the reflexive, covert orienting measures rather than the voluntary, sustained measures. A preprint is available on bioRxiv.
The increase in open access journals and pre-print archives has led to a marked increase in the volume of scientific literature, but I find myself reading less, not more, and reading differently. The constant onslaught of articles with relevant titles and my inability to keep up have led to a sort of learned helplessness. I just don’t try as hard as I used to; it feels like, “what is the point?” This feeling of resignation is compounded by what I find when I do pick up an article: one or two experiments; a fragment of an idea; and a presumption that I should have read the author’s prior $n$ articles on the same subject and be willing to wait for the piecemeal roll-out of the next $m$.
The publish-or-perish model of academic attainment grew out of an era when journals were few, selection standards were high, and the cost of submitting an article multiple times to multiple places was higher still. Review cycles might take months. Publication could take a year after acceptance. Getting people to read your work might mean a lot of money spent on reprint fees and postage. Fewer articles were written, and they were, in general, more substantial offerings. The return on the investment of time spent reading was greater. While academic publishing has changed, many of us are still operating under the old reward model, and the number of articles published is burying many of us. There are of course good reasons to publish more: good practice demands that one-off studies with negative results or failures to replicate should be visible when we search for them. But there is also a more pernicious consequence: we try to publish anything and everything. The cost of submitting is now small. No more making four copies of your manuscript, going to the university’s graphics department to have multiple photographic copies made of your figures and illustrations (perhaps needing a professional artist), and sending them off with a postage-paid return envelope. Now it is click and submit. Rejected. Repeat. Rejected. Repeat. Many journals no longer require novelty or theoretical import for a submission to be deemed acceptable; all that is technically correct is good enough. With the process so much easier, and two publications still deemed better than one, why not carve your work into smaller units? Why wait a year or two for the project to run its cycle when you can publish each result as it occurs, every three months?
Perhaps that might even be a good idea if anyone were able to keep up with reading your work, but the more we contribute to this glut, the less any one of our articles is read, and the less the theoretical arc of a project is recognized outside the group of 10 to 30 investigators working on the exact same topic as you, from the exact same theoretical perspective. Come across something outré? … just too damn busy.
My modest proposal is one, one, one: do not read more than one article by any one author in any one twelve-month period. And further, write as if you expected others to follow this rule. If authors truly believed that having their name added to the author list of some random study because they ran one Northern blot or contributed some reagent actually meant that no one reading that article would read any other article on which they were listed as an author for the next year, then they might think twice about being so included. If you believed that people reading your article reporting experiment 1 of a planned five experiments could only read the single-experiment follow-up studies on a yearly basis, you might decide to wait and publish all five as a single integrated study that provided a clear motivation and a well-reasoned analysis of all the data.
How would this work in practice? Easy-peasy. The same tools that make it so simple to submit so much also make it easy to track and count. Before reading any article, you enter its DOI into a simple computer program that uses that identifier to retrieve the author list and check whether any of the authors appear in your “can’t be read yet” list. The deluxe model of these programs will ask if you want to add the article to your queue and will give you a notification when it becomes eligible. If there are no conflicts, the program assumes you read the article and adds the authors and the date to your database. Your guilt is gone. You read only papers that are substantial, and the investigator who publishes 10 times a year gets punished, not encouraged, for their profligacy.
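The embargo bookkeeping itself is trivial. Here is a minimal, hypothetical sketch in Python; the DOI-to-author lookup (which in practice would query a service such as Crossref) is left out, and the class and names are illustrative, not an existing tool:

```python
from datetime import date, timedelta

EMBARGO = timedelta(days=365)  # the one-per-twelve-months rule

class ReadingTracker:
    """Track which authors you have read and when, to enforce one, one, one."""

    def __init__(self):
        self.last_read = {}  # author name -> date you last read them

    def blocked_authors(self, authors, today):
        """Return the authors on this article still inside their embargo."""
        return [a for a in authors
                if a in self.last_read and today - self.last_read[a] < EMBARGO]

    def record(self, authors, today):
        """Mark an article as read: every listed author starts a new embargo."""
        for a in authors:
            self.last_read[a] = today

tracker = ReadingTracker()
tracker.record(["A. Author", "B. Coauthor"], date(2019, 6, 1))

# Three months later, B. Coauthor is still embargoed; C. Newcomer is clear.
print(tracker.blocked_authors(["B. Coauthor", "C. Newcomer"], date(2019, 9, 1)))
# Thirteen months later, everyone is eligible again.
print(tracker.blocked_authors(["B. Coauthor"], date(2020, 7, 1)))
```

A real version would also keep the “queue” of deferred articles and notify you when the last embargoed author rolls off, but the core check is just date arithmetic.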
Getting ready to take off for ECVP in Leuven. I have a poster to present that includes some of the last work Christie Haskell Marsh did before she completed her PhD (she now works as a senior data scientist with Johnson & Johnson in their Baby Products Division). In addition to Christie’s data, there are data collected as a virtually identical replication. I recorded, in the readme of the repo for my Haskell parsing code, all the pain I caused myself; but rarely is the easy way the fun way, or the interesting way, or the educational way, or the “moral” way. So, I am back on another painful path: trying to make a scientific poster that is reproducible. This isn’t about replicability as in the “crisis”; this is about making a scientific document that clearly documents what you did and how you did it, and that allows others to repeat your analyses exactly as you did them to produce the poster, manuscript, or blog post. Unfortunately, the tools that you need to do this require you to work at them, which makes it less likely people will do it. If you like making your posters or figures in PowerPoint or Illustrator, I do not know how you will be able to do this. But if you are willing to spend some time, have some patience, and are willing to compromise a bit on your aesthetic vision, it is not too hard to achieve this goal right now. For the poster in Leuven I wrote the poster as an Rnw file. This is a combination of R and the noweb format that allows using R to conduct the analyses and generate the figures, while subsequently allowing me to use LaTeX tools to produce the document I will display. I have done other posters starting with an org file and using org-babel for including the code, but if you are going to have to write a bunch of LaTeX and R anyway, the convenience of org-mode is largely absent. Here, in short, is the basic production line.
Do whatever you want to do in RStudio or elsewhere until you have a pretty good idea of the workflow that the poster will need (you can actually include code blocks from other documents, but I did not do that here). Then get to writing your Rnw file. When ready, you move over to R and run
knit("yourFileName.Rnw"). My current draft of this file is here. The result will be a file, yourFileName.tex. You can change that output if you want, but tex is the default. Then you LaTeX the file as many times as you need to, with the tools you have set up, to get a PDF version. The benefit of this approach is that if you have my data (and I will be posting it somewhere publicly soon) you can start with my raw file and reconstruct the poster. Don’t like my analysis? Do your own. You will have the exact code I used available to change. So, this doesn’t sound so hard. What is the problem? Well, getting your tools set up. And then there is the fact that if you want to deviate from the established templates or the default mode, there can be a lot of time spent on Stack Overflow trying to get the tweaks just so. But it is the right way. Our analyses and our choices should be transparent. Throw away your programs of oppression and free yourself to code your posters. Reproducible scientists of the world, unite!
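For anyone who has never seen one, a minimal Rnw file might look something like this (the chunk names, data file, and numbers are made up for illustration; the actual poster is more elaborate). R code lives in chunks delimited by <<...>>= and @, and \Sexpr{} drops computed values into the LaTeX text:

```latex
\documentclass{article}
\begin{document}

% knit() executes this chunk; echo=FALSE hides the code in the output
<<load-data, echo=FALSE>>=
rt <- read.csv("latencies.csv")  # hypothetical data file
m  <- mean(rt$latency)
@

The mean latency was \Sexpr{round(m, 1)} ms.

% figure chunks emit their plots directly into the document
<<latency-figure, fig.width=4, fig.height=3, echo=FALSE>>=
hist(rt$latency, main = "", xlab = "Latency (ms)")
@

\end{document}
```

Running knit() on this file executes the chunks, replaces every \Sexpr{} with its computed value, and writes the .tex file that your LaTeX toolchain then turns into the PDF. Change the data, re-knit, and every number and figure updates itself.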