Missing ECVP

Unfortunately a family emergency has forced me to cancel my trip to ECVP 2015. This is too bad for a number of reasons, but especially because I was looking forward to comments and feedback on the poster describing my work on the controversial issue of whether cueing actually causes contrast enhancement. The answer I find (which I am in the process of writing up now; let me know if you would like to see the draft manuscript) is yes and no: it depends on how you elicit the contrast report. It seems like this is an example of a "multiple drafts model."

A second reason for my disappointment is that it doesn't give me a chance to share my experience with trying to make a poster that embodies reproducible research. The poster was created as an Rtex file (the file is available here, though it does include some hardcoded variables and paths). Using a combination of the R package knitr and good old LaTeX, you can completely recreate the graphs and analyses shown in the poster. Although everything could be self-contained, including the data, I find it a better workflow to break things apart a little. I put the R code in a file by itself (here it is called fd.R), which makes it easier to test and debug. I could also have put the data in the Rtex file itself, but that gets a bit unwieldy.
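To give a feel for how the pieces fit, here is a minimal sketch of what such an external R file could look like (the chunk labels, data file name, and analysis below are invented for illustration; they are not the actual contents of fd.R). The "## ---- label ----" comment markers are what knitr's read_chunk uses to split the file into named chunks:

## ---- load-data ----
# read in the (hypothetical) cueing data
dat <- read.csv("contrastData.csv")

## ---- contrast-figure ----
# density of the reported contrast values
plot(density(dat$contrast), main = "Reported contrast")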

The basic workflow is to pull in all the R blocks with a read_chunk("./fd.R") command. Then you use the parts you want with R code blocks that look like:

%% begin.rcode <optional arguments here>
% <stuff here>
%% end.rcode
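Concretely (using the hypothetical chunk labels from the fd.R sketch above), a setup block runs read_chunk once, and then any block whose label matches a chunk in fd.R, and whose body is left empty, pulls in and executes that chunk's code:

%% begin.rcode setup, include=FALSE
% library(knitr)
% read_chunk("./fd.R")
%% end.rcode

%% begin.rcode contrast-figure, echo=FALSE, fig.width=6
%% end.rcode

Leaving the body empty is what tells knitr to look the label up in the external file, and options like echo=FALSE keep the code itself off the poster while the figure still appears.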

In addition, you have to use all the usual LaTeX machinery to make the actual poster. I use beamerposter, but that brings its own learning curve. The final workflow is to open R and run knit("brittPost.Rtex"), which creates brittPost.tex; you then compile that file with LaTeX using whatever your usual method is.
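Spelled out, that last step is just two commands (assuming pdflatex; substitute whatever LaTeX engine you normally use):

# at the R prompt
library(knitr)
knit("brittPost.Rtex")   # writes brittPost.tex

# then at the shell
# pdflatex brittPost.tex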

Is it worth all the work? I think yes, but do not underestimate the work or the headaches. In the end, if you tweak the data a bit (e.g. you find you need to add or eliminate a participant) you can just recompile the file and everything gets updated exactly. Also, if someone is interested in the analysis you can share the code you actually used, as you actually used it. Even more important than sharing with someone else may be the benefit to your future self: you can go back and see exactly how you made those figures, where you stored the data, and where you called it from. The first time, when you start with a blank page, is always the hardest. After that you are just editing a working version, and each repetition gets easier. I have also tried using orgmode and emacs for this purpose, but so far that is a bit trickier. I especially do not like the inlining of R output in the orgmode version; it requires much more updating by hand for every subsequent change. For right now, Rtex and knitr are the way to go.

If anyone reading this tries this approach, please let me know how it goes. I am especially interested in learning tips to make this reproducible approach to research easier, so don't hesitate to share your experience.

Date: 2015-08-20 Thu 00:00

Author: Britt Anderson
