I have tried to reproduce the results of some of the papers published in ReScience, with the goal of using them as examples in a course on reproducible research. The idea is to let students re-run the code and compare the outcomes, without necessarily understanding much about the contents; the point of the exercise is to show how computational work can be published reproducibly, and what reproduction actually involves with today's state of the art. But before telling students what to do, I prefer to try it myself.
So far I haven't fully succeeded with any of the three papers I have tried, which makes me wonder whether we should encourage authors to provide some level of post-publication maintenance for their submissions. Technically this would be easy: anyone can open an issue on a repository in ReScience-Archives to report a problem.
Any opinions?