Cross-posted from Science 3.0/EvoMRI
On Twitter, Mary Canady asked whether there are any blog posts on the relationship between the SciFund Challenge and Open Science. As I had already started drafting this post, I mentioned that there should be one up soon, and I reframed it a bit to match that perspective, pointing out in the meantime that a blog post on the relation between SciFund and Creative Commons already exists. With a bit of delay due to server problems, here we go now.
One thing at a time, though: What is SciFund? A “Kickstarter for science”, “the model charities have always used (bundling lots of donations to do good works), with an internet/social media twist”, “the first concerted effort at crowdfunding research”, plus an intensive course on science communication between researchers and the public.
As part of that, participants are invited to write blog posts on a number of questions, and the theme for this week is “If this was how science funding worked in the future, would that be OK?” If this were a polar question, my answer would be “No”, but so it would be for the currently prevailing traditional modes of funding, so digging a little deeper is probably in order.
The SciFund homepage provides answers as to what the SciFund Challenge is, who participates, and when and where it takes place – somewhat implying how. From an evolutionary psychology perspective (cf. Tinbergen), why questions are typically more interesting, so missing out on them raises a flag, and others were also wondering. Fortunately, two participants have already blogged their views on that, which boil down to eligibility criteria for classical research funding (including age), budgetary constraints on science funding in general, “always working on someone else’s projects and they reflect someone else’s creativity”, plus a desire to engage with the public and to have fun doing so. I share all these views (and so did others) but would like to add some different ones, which are the main drivers behind my participation.
There is little empirical evidence on the effects of grant giving peer review. No studies assessing the impact of peer review on the quality of funded research are presently available. Experimental studies assessing the effects of grant giving peer review on importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness and accuracy of funded research are urgently needed. Practices aimed to control and evaluate the potentially negative effects of peer review should be implemented meanwhile.
Running this missing test of grant peer review efficiency is an idea that I had been pondering for a while – initially without knowing about the Cochrane study – since I have had ample opportunity to see good research proposals rejected, with no apparent reason why others went through. Similarly, a recent commentary by John Ioannidis in Nature started out with
The research funding system is broken: scientists don’t have time for science any more.
and went on to mention that
the research behind 30% of the pivotal papers from Nobel laureates in medicine, physics and chemistry was done without direct funding
the imperfections of peer review mean that as many as one-third of current grants are effectively being awarded at random
It is a scandal that billions of dollars are spent on research without knowing the best way to distribute that money.
On the occasion of a 2009 study aptly entitled “Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant”, I did not just blog my comments but also set up a draft for a grant proposal. It never got submitted, though, because the confidential and informal reply to my pre-submission inquiry stated clearly that this kind of research would stand no chance of being funded through this “Europe and Global Challenges” scheme.
I gave it another try last year when I submitted a proposal in response to a call for “[p]roblems that, if solved, would advance the knowledge and capabilities in an area of your research” (link now rotten), but the jury chose another project, as usual behind closed doors. I had also approached several funders – VolkswagenStiftung, DFG, ERC, ESF, Wellcome Trust, European Commission – about whether they would have an interest in funding such a study on the efficiency of the way they distribute their funds, but I am too small a fish in those ponds to receive anything more than politeness in response.
So when I heard of the SciFund Challenge, I immediately signed up and thought I would submit something along these lines, hoping that the problem of the missing test of this crucial element of the research system might get a bit of attention this way. But as the preparations for the SciFund launch on November 1 unfolded, I became aware that the budgets likely to be within reach of such a crowdfunding endeavour would not make for a good case in comparison to the billions spent the classical way. I also realized that the final group of 49 projects would in itself represent such a test: whatever the amount finally raised by the SciFund Challenge, there will be very few classically funded research projects that could be said to have a higher impact in terms of communicating science to non-scientific audiences (I’m a Scientist comes to mind, but that isn’t classical research funding either), particularly before the start of the funding period. The current total is embedded below (courtesy Jarrett Byrnes).
This brings us back on topic – the graph is updated on an hourly basis, so it allows one to follow the overall SciFund budget as it evolves. A key aspect of Open Science is to make the processes visible, so as to allow others to follow, comment, contribute, replicate or otherwise engage with the research. Open scientists also write grants in the open. It was thus a logical fit to have Jai Ranganathan present the SciFund Challenge at the Open Science Summit last month (three initiatives aimed at microfunding science had been presented there last year). I followed the event remotely (my talk is here) and enjoyed Jai’s talk very much, so here it is again:
Given that SciFund takes place in public, it is in effect a public dataset that could, in itself, be used to study correlates of – and possibly even causes of – successful crowdfunding campaigns for research. The dataset comprises the graph above, along with its counterpart for all the individual projects, and all the blog posts, tweets and other public mentions of the initiative or its projects. However, very few of these items are clearly labeled as being openly licensed, so any reuse comes with the pitfall of potential copyright problems later on. In fact, the Panton Principles for the sharing of scientific data recommend that data be put into the Public Domain, so as to avoid such problems right from the start.
Beyond data, I keep saying that all the steps in the research cycle can in principle be performed in the open, the exception being funding decisions, and here we are at SciFund again – everyone can easily find out which projects are part of the competition, how much money they raised (and from whom) and whether they met their target. For the situation in classical research funding (where normally only the winners are announced), let me quote an old post on the matter:
Science funding, in a sense, is like some kinds of sports (think figure skating) in that decisions are being made by a committee. However, the science funding committees evaluate only planned choreographies (though they take into account edited records of actual performances in the past). What is more, participating athletes (let alone the public), and often even committee members do not know each other’s identity, and the whole process of selecting a winner is secret. What would you think of a sports champion elected that way? Or, the other way round, wouldn’t it be interesting to be a spectator in the science funding sports, rather than reading hype-cycling “scientists found out” reports? After all, this is supposed to be a venue for creativity and sharp minds, both of which stand good chances of attracting attention.
Science funding is also supposed to spur innovation (like combining ice skating and elements of ballroom dance into what is now known as ice dancing). But if ice dancing has never been performed before, people who are experienced in either ice skating or ballroom dance, or in areas yet further away, are to decide whether such a new kind of choreography stands any chance of winning future major championships. How likely are they to perform well on this task? Well, I don’t know, but their chances are certainly drastically slimmer than those of identifying which of a set of published articles (which contain the edited records of ice dances whose original choreographies did indeed get funded) are going to be considered a masterpiece three years later. This latter experiment has actually been performed by Wellcome Trust researchers, and although the committee members performed well overall in their predictions, they missed a lot of jewels too.
So SciFund is important for Open Science because it fills the last major gap in the open version of the research cycle with some initial data. Perhaps not surprisingly, this is where the SciFund project comes in that I have submitted in collaboration with Fabiana Kubke at the University of Auckland. Its outlines have already been presented here on this blog in a series of posts in spring, but we found that the label of a GitHub for science that Marcio van Muhlen used for a closely related idea might not fly with the general public, so we framed the project as Communicating research the Beethoven way, alluding to the following quote from a letter Beethoven sent to one of his funders:
There should be only one repository of art in the world, to which the artist would donate his works in order to take what he would need
In short, we plan to get away from the current state of the scientific literature, with its millions of static stand-alone articles describing research from months or years ago. We want to take a suitably licensed (CC BY) Open-Access slice of it and turn that into a set of evolving and interlinked articles that reflect both the history and the state of the art of their topic and can be adapted immediately, in a way readable by humans and machines, as research proceeds. Such a system unites aspects of electronic notebooks, libraries, archives and museums and implements the Criteria for the journal of the future as well as the Five Stars of Online Journal Articles.
This would allow any researcher, research funder, patient, teacher or lay person to stay abreast of research on any topic and to interact with others who share their interests. In order to be useful for researchers right from the start, integration with workflows all along the research cycle is key, although the publishing step will be steadily decomposed on the way towards micro- and nanopublications (thereby making the least publishable unit obsolete). Perpetual public peer review will obviously be facilitated by this system; data and code shall be integrated with classical text and multimedia content; forking shall be easy; and we are pondering ideas like using automated searches over semantically enhanced content as the basis for identifying knowledge gaps and proposing new research projects.
Getting all of the above crowdfunded in this first go was not realistic, so we chose to scale the project down to a prototype with just a few evolving review articles, focusing on the development of the software and its application to a few expected use cases, and budgeted accordingly. Given the workings of RocketHub, it might have been a better crowdfunding strategy to go for an even more scaled-down version – with a smaller funding goal – and to prepare for funding overshoot, but this would have made the proposal less credible. Public proposals like those in the SciFund Challenge (especially if they provided a bit more detail) would expose weaknesses of this kind much more than traditional non-public proposals do. But then again, I would personally favour a system in which the appropriateness of budget use would by default be judged after the research has been performed, rather than before – with severe penalties for misuse of funds, and with special provisions for projects that require pre-research investments in infrastructure that is not commercially available.
Entering an Open Science project into the SciFund Challenge also affects the range of rewards that can be offered to supporters of a project: while many projects offer privileged information to supporters, there is no such thing to offer in projects run entirely in the open. Open projects could in principle offer more opportunities for the public to interact with the research, but RocketHub – the crowdfunding platform through which SciFund projects are seeking support – does not really seem conducive to engagement beyond financial transactions. For instance, very few if any fuelers comment on the projects they funded – intuitively, I would expect way more comments than money transfers.
Are SciFund participants themselves engaging in Open Science beyond putting their research proposals out there in public? Well, at least Lee Worden (Mathematics of direct democracy) has an open notebook and released his SciFund video under CC BY, and I know that many of them have published in Open-Access journals. For instance, Shermin de Silva (with whom I am collaborating on vocal production in elephants, sadly in a rather non-open project) of Elephant research was the lead author of a paper on the dynamics of social networks in groups of female Asian elephants, which is licensed CC BY and could well serve as the seed of an evolving review article on the topic. Similarly, Patty Brennan published a CC BY-licensed paper that laid the foundation for her SciFund project “Force of Duck”, and Zen Faulkes (Amazon Crayfish) has at least one CC BY paper too (his video is under CC BY, but I don’t think that is appropriate, as he was using screenshots of materials that were copyrighted by others).
Have we now covered the topic of interactions between SciFund and Open Science? Not quite. For instance, I only mentioned that having proposals out in the open facilitates peer review, but haven’t gone into that and indeed will leave it for another post (for preview, take a look here and here). Another aspect as yet untouched is how the open materials produced for and throughout the challenge can be reused later (e.g. in teaching) or how the experiences gathered through the initiative can be integrated with advice on getting grants.
With regard to the actual SciFund blogging theme of whether the SciFund approach could be a model for future research funding, I would like to expand a bit on my “No” from the introduction by quoting from the Ioannidis commentary cited above, which does not mention crowdfunding (nor does the Cochrane review, by the way) but puts the classical funding schemes into a wider perspective:
Although detailed proposals may be indispensable for some projects, such as rigorous clinical trials and large-scale collaborative research, ideas abound for more efficient ways to fund general research. Some organizations are already experimenting. Multiple options could co-exist, with portions of the budget earmarked for different schemes.
I will leave the details to a later blog post too – for preview, take a look here.