Wednesday, April 15, 2015

Co-publishing with Against the Grain

I recently agreed to co-publish four interviews with the ATG NewsChannel. The first of the Q&As is with librarian Marcus Banks, and can be read on the Against the Grain website here (or in the post below).

Marcus is a former editor-in-chief of the open access journal Biomedical Digital Libraries, a journal that had to cease operations in 2008.

Amongst other things, Marcus discusses the lessons he learned from his experience with Biomedical Digital Libraries, the economics of OA publishing, and the possibility of journals evolving into blogs.

Against the Grain publishes news about libraries, publishers, book jobbers, and subscription agents. Its goal is to link publishers, vendors, and librarians by reporting on the issues, literature, and people that impact the world of books and journals.  

Tuesday, March 31, 2015

The Life and Death of an Open Access Journal: Q&A with Librarian Marcus Banks

Librarians have been at the forefront of the open access movement since the beginning, not least because in 1998 the Association of Research Libraries (ARL) founded the Scholarly Publishing and Academic Resources Coalition (SPARC). Today SPARC is arguably the world’s most active and influential OA advocacy organisation.
Marcus Banks
It is important to note that librarians’ interest in open access grew primarily out of their frustration with the so-called “serials crisis” — the phenomenon that has seen the cost of scholarly journals consistently grow at a higher rate than library serials budgets.

SPARC’s initial strategy, therefore, was to encourage the growth of new low-cost, non-profit, subscription journals able to compete with the increasingly expensive ones produced by profit-hungry commercial publishers. As SPARC’s then Enterprise Director Rick Johnson wrote in 2000, “In 1998, after years of mounting frustration with high and fast-rising commercial journal prices, a group of libraries formally launched SPARC to promote competition in the scholarly publishing marketplace. The idea was to use libraries’ buying power to nurture the creation of high-quality, low-priced publication outlets for peer-reviewed scientific, technical, and medical research.”

In the wake of the 2002 Budapest Open Access Initiative (an event attended by Johnson), however, SPARC began to focus more and more of its efforts on open access. The assumption was that this would not only allow research to be made freely available, but finally resolve the affordability problem faced by the research community. As the BOAI text expressed it, “the overall costs of providing open access to this literature are far lower than the costs of traditional forms of dissemination.”

Ironically, despite their high profile advocacy for open access many librarians have proved strangely reluctant to practice what they preach, and as late as last year calls were still being made for the profession to start “walking the talk”.  

On the other hand, many librarians have embraced OA, particularly medical librarians. In 2001, for instance, the Journal of the Medical Library Association (JMLA) began to make its content freely available on the Internet. And in 2003 Charles Greenberg, then at the Yale University Medical Library, launched an open access journal with BioMed Central called Biomedical Digital Libraries (BDL). One of the first to join the editorial board (and later to take over as Editor-in-Chief) was Marcus Banks, who was then working at the US National Library of Medicine.

Four years later, however, BDL became a victim of BMC’s decision to increase the article-processing charges (APCs) it levies. This meant that few librarians could afford to publish in the journal any longer, and submissions began to dry up. Despite several attempts to move BDL to a different publishing platform, in 2008 Banks had to make the hard decision to cease publishing the journal.

What do we learn from BDL’s short life? In advocating for pay-to-publish gold OA did open access advocates underestimate how much it costs to publish a journal? Or have publishers simply been able to capture open access and use it to further ramp up what many believe to be their excessive profits? Why has JMLA continued to prosper under open access while BDL has withered and died? Was BDL unable to compete with JMLA on a level playing field? Could the demise of BDL have been avoided?  What, if anything, does the journal’s fate tell us about the future of open access?

I discuss these and other questions with Banks below. The issue of affordability, it seems to me, is particularly apposite, as librarians are having to confront the harsh truth that, far from reducing the costs of scholarly communication, open access appears more likely to increase them.

It turns out that Banks has an interesting perspective on this issue. As he puts it, “At the risk of frustrating many librarian colleagues, I must say that the framing of open access as a means of saving money has been and remains a serious strategic error.”

He adds, “A fully open access world may not save any money and could cost more than we pay now — this world would include publication charges as well as payments for tools that mined and sorted the now completely open literature. That’s fine with me, because in this world we’d be getting better value for money.”

The interview begins …

RP: Can you say something about your background and career to date?

MB: I have been a librarian since 2002. My first position after earning my Masters of Library and Information Science was as an Associate Fellow at the US National Library of Medicine (NLM), from 2002-2004. During this time NLM was developing PubMed Central (PMC) as a freely accessible digital archive of biomedical literature.

Growth at PMC was slow, as deposits to it were voluntary — this was years before PMC became the required repository under the terms of the NIH Public Access Policy. Publishers rightly worried that a fully open access archive would challenge their business model, a concern that persists today.

Watching this debate unfold raised my awareness of the various agendas in scholarly publishing, as well as of the potential for open access publishing to expand the reach of biomedical literature.

RP: What are you doing currently?

MB: My most recent position was as the Director of Library/Academic & Instructional Innovation at Samuel Merritt University in Oakland, California. Since then my wife and I have returned to the Chicago area for both personal and professional reasons. I am currently pursuing employment while building a consulting practice devoted to transformation in scholarly communication. Even with “gainful employment” I would continue the consulting.

RP: You said that the growing debate about scholarly communication made you aware of the potential for open access publishing. You were later involved in the creation of an open access journal called Biomedical Digital Libraries, which I think was launched in 2004 but ceased operations in 2008. Can you say what your role at the journal was, why the journal was created, and why it did not succeed?

MB: Charles Greenberg, then at the Yale medical library, launched Biomedical Digital Libraries (BDL) at the Medical Library Association meeting in May 2003. It was an open access title published by BioMed Central (BMC). His first task was to recruit an editorial board, and I joined as an Associate Editor. Our first papers appeared in 2004. As Charlie moved on to other projects, I became co-editor and then sole Editor-in-Chief in 2006.

Thursday, March 26, 2015

UCL Vice-Provost comments on the Independent Review of the Implementation of the RCUK Open Access policy

Guest Post by Professor David Price, Vice-Provost (Research), University College London

David Price
Research Councils UK (RCUK) has today released the Report of an independent review body on the implementation of its Open Access policy.

It is not a review of Open Access policies and their implementation in the UK. The Report is quite clear about this – it is a review of the impacts of the implementation of the RCUK Policy on Open Access for its funded research outputs. This is a review which is being undertaken at an early stage in the history of that OA policy. As such, there is much that is good and helpful about the Report’s findings and I will touch on some of these points below.

Overall, however, the Report is a missed opportunity to look at the deeper implications of the move to Open Access in the UK. There are broader issues, in many of which RCUK is a leader, which would have benefited from a more confident treatment by the panel. There is still a great deal of work to do!

The Report looks in some detail at the question of embargoes. While the short embargoes of 6 and 12 months have been taken up by the research community, there is still unhappiness. As the Report says, some of this is due to poor communication of the policy and resulting confusion in the academic community. Another aspect of it, however, is a genuine concern among some communities, for example History scholars, that short embargo periods are harmful to academic freedom to choose where to publish. RCUK needs to look at the issue of embargo periods again.

The Report also highlights a number of problems with the RCUK recommendation of a CC-BY licence for research outputs. If this is the RCUK position, then compliance with the policy would require academics to use this licence, yet the Report’s review of implementation shows that this has not always been the case. The Report also, quite rightly, highlights the unhappiness of the Arts and Humanities community with the requirement for a CC-BY licence. From the evidence presented, it looks as though this community feels it is being made to dance to a biomedical and scientific tune, where CC-BY is more acceptable. The Report is right to highlight the need for further investigation.

The Report has further nuggets of wisdom. It highlights the administrative costs for universities of implementing the RCUK Open Access policy, building on the London Higher Report supported by SPARC Europe. It also suggests that university and publisher systems should be developed to accommodate ORCID  (for author IDs) and FundRef (for funder information), which will help monitor implementation of the policy in future years.

Table 7 presents some really interesting data on the mean costs of Article Processing Charges (APCs).

[Table 7 (figures not reproduced): mean APCs for OA journals published by non-subscription publishers, full OA journals published by subscription publishers, and hybrid journals published by subscription publishers, with 5-year means (2010-14).]

Why are the costs in the final column for Hybrids so much bigger than the rest? It was beyond the remit of the review to investigate this in detail, but this question does need further study. RCUK derives its money from public funds and this is a question which the taxpayer would certainly have a right to understand in more detail.

While the Report contains much that is useful and thought-provoking, there are some big gaps that it should have covered. The Report consciously limits itself to the implementation of the RCUK policy, and does not look at the wider UK Open Access scene in detail. This is a mistake because the RCUK position would be more intelligible if such a wider comparison had taken place. The Report says that the RCUK policy position is broadly complementary to other UK OA policies. Any misunderstandings on this front may be due, it says, to poor communication of the policies. Really? Are there many universities who believe this? The new HEFCE policy for REF 2020 seems to me to be quite different from the RCUK policy, and it is the REF policy that is capturing university attention at the moment. It is only the REF policy which is insisting on ‘deposit on acceptance’. And it is the RCUK policy which encourages Gold OA publications and requires the use of a CC-BY licence. The REF policy is neutral, for example, as to the colour (Gold or Green) of the OA output. To say that the RCUK and REF policies are complementary defies logic. The RCUK Review panel needs to think this one through again.

The Report highlights the shortcomings of universities in gathering data for the review. It is right to do so. There needs to be more accurate reporting next time. In that respect, I would have expected the Review panel to draw up a template for reporting, addressing the issues it identified as weaknesses in the first set of reports. The Report recommends that a template be constructed, but why (when this is such an important issue) did it not draw up this template itself? Not good practice.

Finally, the Report cautiously advocates that RCUK look again at the level of funding it provides for OA dissemination in future years. A welcome recommendation, but rather weak. Wellcome funds all OA outputs that emanate from its funded research; why did the RCUK review not make a similar recommendation? As things stand, once RCUK funds are exhausted, universities must either find monies for APCs themselves or advise authors to publish their outputs via the Green route. This is unsatisfactory and will lead to a fragmented publication framework for RCUK research, which is in no one’s interests.

To conclude: the independent Review panel which has produced the review of the implementation of the RCUK Open Access policy has only half done its job. It has produced a detailed analysis about implementation, which is useful. But, in walking away from broader policy issues, it leaves many questions unanswered which should have been tackled. Will future reviews take these issues forward? They should.

Sunday, March 22, 2015

Open Access and the Request Eprint Button: Q&A with Eloy Rodrigues

Contrary to what one might expect, not all the items in open access repositories are publicly available. Estimates of the percentage of the content in repositories that is not in fact open access tend to range from around 40% to 60%. This will include bibliographic records containing only metadata, plus full-text documents that have been placed on “dark deposit” — i.e. documents that are present in the repository but not freely available, either because they are subject to a publisher’s embargo or because the author(s) asked for the full text to be deposited on a closed access basis.

To enable researchers to nevertheless obtain copies of items that have been placed on dark deposit, OA advocates developed the request eprint button. But how does the button work, and how effective is it?

Below Eloy Rodrigues, Director of Documentation Services at the University of Minho, discusses the issues, and outlines the situation at UMinho.
Eloy Rodrigues

RP: How many scholarly items are currently deposited in the University of Minho’s institutional repository RepositóriUM, and what are the growth rates?

ER: Currently we have more than 32,600 items in RepositóriUM, with around 5,000 being deposited yearly since the upgrade of our policy (effective since January 2011). Since 2011 more than 20,000 items have been deposited.

RP: Of these, how many are full text and freely available to the public (i.e. they are not metadata alone, not currently subject to publisher embargo, and not restricted to members of the university — as in requiring login)?

ER: Almost 26,000 (25,932) are freely available, which is more than 79% of the total.

RP: As I understand it, repository users can ask that a private copy of any document on dark deposit is made available to them by using the request eprint button built into the repository. In 2010 you co-authored a paper about this button, which was then more frequently called the “Fair Dealing” button. Your paper included data on “approval success rates” (i.e. the frequency with which authors sanctioned a copy of their work being made available to those requesting it). These data came from three universities: Southampton, Stirling and UMinho (your institution). The approval success rates were, respectively, 47%, 60% and 27%, with many requests simply ignored or lost. How has the situation at the University of Minho changed since then? What are the current figures?

ER: The overall response rate has remained basically the same, or even a little lower. In 2014 we had a global response rate of around 23%, with 21% sending the requested documents and 2% denying the request.

However the global response rate is highly “biased” by the effect of theses and dissertations. Theses and dissertations (T&Ds) account for around 21% of the total number of documents in RepositóriUM, and around 30% of the total number of restricted or embargoed access documents (currently around 6,700), but I estimate (based on some small “samples”) they represent far more than 50% (probably around 60% to 70%) of the requests received.

Because most authors of T&Ds don’t maintain any connection with the university after completing their thesis or dissertation, and often abandon the email address registered when the document was deposited (the address used to send requests to authors), the T&Ds response rate is very low (probably below 10%), and that obviously drags down the global response rate.

We don’t have hard data on this (we would need to look “manually” through the request logs, as we don’t record the document type of each request), but based on some anecdotal evidence I estimate the response rate from UMinho members (professors and researchers) is at least twice the global average. So, excluding T&Ds, I “guess” the current response rate is around 50%, or perhaps a little higher (50% to 60%).
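Rodrigues’ estimate can be sanity-checked with a simple weighted average. The figures below are illustrative assumptions (a 65% T&D share of requests and a 10% T&D response rate, both within the ranges he suggests), not measured data:

```python
# Back-of-envelope check of the response-rate estimates above.
# Assumed (hypothetical) figures: T&Ds account for 65% of requests and
# respond at 10%; the reported 2014 global response rate was about 23%.
td_share = 0.65
td_rate = 0.10
global_rate = 0.23

# global_rate = td_share * td_rate + (1 - td_share) * other_rate
other_rate = (global_rate - td_share * td_rate) / (1 - td_share)
print(f"Implied non-T&D response rate: {other_rate:.0%}")
```

Under these assumptions the implied response rate for non-T&D requests comes out at roughly 47%, consistent with the “around 50%” guess.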

Eprint fatigue

RP: In 2010 you made the following comment on a blog: “Our experience is that authors get ‘tired’ of replying to copy requests, especially when requests are very frequent. The consequence is that some start not replying at all, and others ask to change to open access articles/papers/theses that were in closed/embargoed access. We had more than 20 of those requests just on the last year…” Is that still your experience, or have authors’ attitudes and behaviour changed since then?

ER: In the last couple of years I haven’t had regular conversations or feedback from Minho researchers about the copy requests, in the way I did in the first few years after the introduction of the button. But I know we still receive frequent (approximately on a weekly basis) requests to change the access status of closed/embargoed documents to open access.

RP: Presumably if a paper is on closed access as a result of a publisher embargo it is not possible to change the status to open access?

ER: Presumably yes. But there is a wide variety of behaviour among UMinho authors. While some are confident and fearless, others are fearful at the time of deposit, especially with papers published in journals or conference proceedings that do not have well formalised self-archiving/OA policies. Afterwards they tend to become less timid about their publications.

We inform authors about possible access permissions or restrictions to their deposited publications, but we respect their wishes about the access status.

RP: I assume most institutional repositories now have a request eprint button. But I think not all IRs implement the button in the same way. Can you talk me through the process at RepositóriUM once a user hits the eprint button? Is it fully automated, or is there some manual intervention? What happens behind the scenes when a user requests a copy of an item in the repository?

ER: The way we implement the process in RepositóriUM (and I assume it will be similar in other DSpace based repositories, as the request-copy addon to DSpace was developed here at UMinho) is the following: When users hit the button (actually it is a closed access logo) and fill in a form with their name and email (and an optional message), an automatic email is immediately sent to the author.

That message contains a token URL, directing the author to a RepositóriUM page, where there are two buttons – Send copy / Don’t send copy. After choosing one of the options another page is displayed with a template message, which can be edited by the replier. The final step is hitting the send button.

So, in summary, the copy is always sent by the author (not automatically or by repository staff), and the process requires just three clicks, plus editing the reply message if the author chooses to do so.
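As a rough sketch of the flow just described — a reader submits a form, the author receives a one-off token URL and decides whether to send the copy — the logic can be modelled as follows. All names here are hypothetical illustrations; the real DSpace request-copy addon differs in its details:

```python
# Illustrative sketch of a token-based "request a copy" workflow, loosely
# modelled on the process described above. Names and URLs are hypothetical.
import secrets

pending_requests = {}  # token -> request details

def submit_request(item_id, requester_name, requester_email, message=""):
    """Reader hits the closed-access logo and fills in the request form."""
    token = secrets.token_urlsafe(16)
    pending_requests[token] = {
        "item": item_id,
        "name": requester_name,
        "email": requester_email,
        "message": message,
    }
    # In a real system this token URL would be emailed to the author.
    return f"https://repository.example.org/request/{token}"

def author_decision(token, send_copy, reply_text=""):
    """Author follows the token URL and clicks Send copy / Don't send copy."""
    request = pending_requests.pop(token)  # token is single-use
    if send_copy:
        return f"Emailing {request['item']} to {request['email']}: {reply_text}"
    return f"Refusal sent to {request['email']}: {reply_text}"

url = submit_request("hdl:1822/12345", "A. Reader", "reader@example.org")
token = url.rsplit("/", 1)[-1]
print(author_decision(token, send_copy=True, reply_text="Happy to share."))
```

The single-use token is the key design choice: the author never needs a repository login, yet only the person who received the email can approve or refuse the request.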

RP: Advocates for use of the button believe that it is a much more effective way for researchers to get access to papers on dark deposit than, say, by directly emailing the authors. I note a paper published in PLOS ONE in 2011 tested the email approach. A group of researchers sent out a number of email requests for papers in the area of HIV vaccine research. The success rates they reported were between 54% and 60%, which is perhaps a little higher than the rates described in your 2010 paper. What do we make of that?

ER: I can only speculate about it. The button simplifies the process, both for the requester (who only needs to make two clicks and, if they want, customise a template message to the author) and for the author (who receives an email from the repository and just needs to make three clicks, and, if they want, customise a reply message). But maybe, at least for some people, this appears completely impersonal, and they prefer the more personal, human contact of a direct email.

That said, I’m not convinced that email contact will get a higher response rate than the button, and you cannot infer that from the PLOS paper. To test that hypothesis you would need to test both approaches for the same universe of publications and authors.

RP: The PLOS ONE study reported that two thirds of the papers (where the author responded positively) were received “on the same day or the next. However, the other third of respondents took on average 11 days to reply (median 3 days, maximum 54 days).” Do you have any information on turnaround time for those who use the button at UMinho?

ER: We just have data on the mean response time. In 2014 the mean response time was nearly six days for accepted requests, and 3.5 days for rejected requests. Again, I think this result may be slightly biased by a higher response time from T&D authors, but that would need to be investigated.

User friendly?

RP: On March 2nd I tried to access a paper in RepositóriUM called “Academic job satisfaction and motivation: findings from a nationwide study in Portuguese higher education”. On trying to open the paper I was told that it was on restricted access and invited to request a copy of it, which I did. As the image below shows, I was informed that my request had been successful. However, I never heard anything further, and was left in the dark as to what had happened to my request. It is not a very user-friendly system is it? Might not most readers be inclined to give up after even a couple of such failed attempts to get a paper?

ER: Yes, I recognise that. It is not very user friendly, and people may be inclined to give up after a couple of “non-answers”. We’ve focused the development of the addon on making it very easy and simple to use by external readers and especially by UMinho authors.

At the time of development we really didn’t consider the issues around monitoring, reporting, collecting statistics on the use of the button, or providing feedback to requesters. And after the initial development we have really just made some minor improvements/adjustments (like spam control through a captcha feature) and upgraded it to the newest DSpace releases.

RP: My experience with the ORBi repository at the University of Liège was somewhat different. I tried the button there twice. On both occasions I received the full text (or a link to it) within 24 hours. Paul Thirion, Head librarian at the University of Liège, reports that the approval success rates for requests made using the button built into the ORBi repository are higher than average, ranging from 67% in 2009 to 81% in 2014. Do you have any sense of why Liège is more successful at getting researchers to approve eprint requests than other universities?

ER: I really don’t know. I imagine that, apart from some subjective aspects (like cultural and organisational differences and/or a different relationship to and perception of open access and the institutional repository between researchers at Liège and Minho etc.), there are some objective factors to explain it: probably the T&Ds effect is not present at ORBi, and I can speculate that there is a difference in the percentage of closed/embargoed access documents in ORBi (which I think is higher than in RepositóriUM), and maybe there is also a lower percentage of documents for which the access status is changed to open after deposition. [RP: Paul Thirion reports that around 62% of the documents in ORBi are full-text].

To what end?

RP: The paper you co-authored in 2010 goes on to say, “Given a significant number of button requests which are ignored or lost, one might be tempted to assume that it has not worked. However, this is not true. The principal impact of the Button has been to enable the adoption of institutional IDOA mandates.” This left me wondering as to the point of the button. I had assumed the sole purpose was to ensure that those who want access to papers under publisher embargo can nevertheless obtain a copy of them. For instance, in commenting on the open access policy being introduced by the Higher Education Funding Council for England Stevan Harnad described the purpose of the button as being to “tide over the usage needs of UK and worldwide researchers for the deposited research during the allowable embargo.” Your paper, however, suggests that the objective is rather to encourage funders and institutions to introduce OA mandates. What are your views today on the purpose of the button?

ER: I think the introduction of the button had both the immediate and practical objective of providing access to papers which were deposited with temporary (embargo period) or definitive access restriction, and the more strategic objective of helping in the introduction of mandates (by creating a mechanism that allows mandating universal deposit, regardless of eventual access restrictions, while offering a “second class” access procedure).

In my opinion both purposes remain important today.

RP: How would you describe the success of the button today, and what do you predict for its future success?

ER: I don’t know what the global response rate to button requests is. But even if it is closer to the UMinho 50% estimate than to the Liège 80% result, it means that tens or hundreds of thousands of papers have been made available to readers who would otherwise not have had access to them.

So, I think the button is relatively successful, both in actually providing access to closed/embargoed access publications and in helping institutions and funders to define self-archiving mandates, without pushing themselves into spending yet more money by paying APCs, on top of their subscription costs.

For the immediate future, I predict the button will remain useful and hopefully more successful, as the number of mandatory polices, as well as embargoes, grows.

RP: One thing I find striking is that advocates for the button seem to have done very little research into its efficacy. Why do you think that is?

ER: I can only reply for myself and for UMinho’s RepositóriUM. I think the first reason is that our main focus is on managing and running the repository as a critical service of the university, with limited capacity to do research and development. So we use that limited capacity for very practical and applied developments and not on “non-applied research”.

The second reason is that, despite being important and useful, the button is not among our top three priorities for work on the repository. We’ve focused much of our effort on improving the repository’s interoperability and integration with other services/systems, on facilitating and simplifying the deposit/self-archiving of publications into the repository, on collecting and providing usage statistics to authors of publications in the repository, and on guaranteeing/improving repository visibility in the global search engines (especially Google). All those issues have higher strategic relevance for us given the current state of policy implementation and repository development at UMinho.

RP: Do you think there is a danger that if the button were to prove too successful publishers might seek to curtail or prevent its use in some way?

ER: I don’t think so. It is at least very questionable that publishers would have any solid legal ground to act against the button use, and, on the other hand, it would give them very bad publicity. So, from a cost-benefit point of view, I think the button is not a high priority for publishers either.

RP: Thank you for taking the time to answer my questions.

I am currently working on a longer document about dark deposit and the request eprint button. As such, I would welcome people’s thoughts about and experiences of these two things. I can be contacted here.

Sunday, March 08, 2015

The OA Interviews: Alison Mudditt, Director, University of California Press

As the open access train rolls towards the future, more and more traditional scholarly publishers are jumping on board. Deciding when and how to do so is not easy, as Wiley’s Alice Meadows pointed out recently on the Scholarly Kitchen. Nevertheless, OA is now inevitable, so the plunge has to be taken sooner or later.

The University of California Press made its move in January, launching two new open access programmes—Collabra and Luminos.
Alison Mudditt

Collabra is a mega journal that will initially focus on three broad disciplinary areas (life and biomedical sciences, ecology and environmental science, and social and behavioural sciences), and then expand into other disciplines at a later date. Collabra is expected to publish its first articles in the next month or so.

Luminos is an open access monograph publisher that will publish its first book this autumn.

What is the context in which UC Press’ move needs to be seen?

The key challenge open access poses for publishers is how to develop a workable business model. After all, since OA requires that research publications are made freely available, the traditional subscription model no longer works. Understandably, therefore, publishers have concluded that the costs of producing OA journals and books will have to be recovered at the author’s side of the process (via author-side fees) rather than at the reader’s side (via subscriptions). 

The question therefore is: how can this be done in a way that is both workable and sustainable? Today there are two primary ways of attempting it—the article-processing charge (APC) and the membership scheme.

In the former case, the onus of finding the funds needed to pay to publish falls on authors. This means that if they cannot persuade their institution or funder (assuming they have one) to pay the bill, they may have to pay it themselves. (Most OA publishers advertise fee waivers, but it is not entirely clear how many researchers actually benefit from these, especially in the case of commercial publishers.)

In the latter case, the author’s institution takes on the responsibility—by bulk-buying APCs (publication rights if you like) for all its researchers. Normally, this means the institutional library will pay subscription-like annual fees to a number of open access publishers. For authors this has the benefit of making OA publication services free at the point of use, although there are variations on this model—e.g. here and here
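The trade-off between the two routes comes down to simple arithmetic: a membership (bulk) scheme only beats per-article APCs if the institution’s authors publish enough to spread the flat fee. The figures below are purely illustrative assumptions, not any publisher’s real prices:

```python
# Hypothetical comparison of the two author-side payment routes described
# above. All figures are illustrative assumptions, not real prices.
apc_list_price = 2_000        # per-article charge paid ad hoc by authors
membership_fee = 45_000       # flat annual fee paid by the library
expected_articles = 30        # articles the institution expects to publish

per_article_under_membership = membership_fee / expected_articles
print(f"List-price APC:           ${apc_list_price:,.0f}")
print(f"Effective cost per paper: ${per_article_under_membership:,.0f}")
# The membership pays off only if enough authors actually use it;
# below the break-even volume, ad hoc APCs would be cheaper.
```

With these numbers the effective cost per paper under membership is $1,500, below the $2,000 list price; halve the publishing volume and the scheme becomes the more expensive option.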

And as large subscription publishers like Springer ramp up their open access activities we are seeing new-style big deals emerge whereby libraries pay a single annual fee that covers both access to the publisher’s paywalled content and publishing rights for researchers who want to publish in their open access journals.

These new models have their critics, and OA advocates frequently point out that the majority of OA journals today do not charge a publication fee. The implication is that there are other, better ways of funding open access. Nevertheless, as large commercial subscription publishers increasingly move into the open access space (offering OA journals and, increasingly, OA books), the tide is currently moving strongly in the direction of author-side pay-to-publish models.

Today, therefore, unless their institution has a membership scheme with the OA journal in which they want to publish, authors looking to embrace OA still face the challenge of finding some way of paying the publication fee. This can be very difficult, particularly for researchers who have little or no funding (as UCLA behavioural and evolutionary ecologist Peter Nonacs describes here).

Those who work in subjects where the monograph is the primary vehicle for communicating research find themselves in a particularly hard place. Consider, for instance, that while a commercial publisher like Springer charges $3,000 to make an article open access (and non-profit OA publisher PLOS charges between $1,350 and $2,900), the cost of publishing an OA book can be as much as $17,500 plus taxes (which is what Palgrave Macmillan charges). Clearly, this poses a huge challenge.

In the hope of addressing this issue Knowledge Unlatched—a not-for-profit organisation coordinating a global consortium of libraries to share the costs of making books open access—has pioneered a library consortium approach. 

The model used here is not unlike the membership schemes used by OA journal publishers, but what libraries pay depends not on the number of texts their researchers publish, but on how many other libraries join the consortium. Basically, publication costs are shared between institutions on a per-title basis. Knowledge Unlatched estimates these costs at around $13 to $60 per library, per book. Time will tell how successful this approach proves.
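The arithmetic behind this consortium model can be sketched in a few lines. The $12,000 per-title cost below is an assumed figure, chosen only because it reproduces the $13 to $60 range quoted above; Knowledge Unlatched's actual per-title costs vary.

```python
# Illustrative sketch of the Knowledge Unlatched cost-sharing model: a fixed
# per-title publication cost is split equally across the consortium, so each
# library's share shrinks as more libraries join. The $12,000 title cost is
# an assumption for illustration, not a Knowledge Unlatched figure.

def per_library_cost(title_cost: float, num_libraries: int) -> float:
    """Each library's share of the cost of making one book open access."""
    return title_cost / num_libraries

print(per_library_cost(12_000, 200))            # 60.0
print(round(per_library_cost(12_000, 900), 2))  # 13.33
```

The design point is the incentive this creates: the per-library price falls as membership grows, so every library that joins makes participation cheaper for all the others.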

Variations on a theme

So what is UC Press bringing to the party? Essentially, while embracing the two primary author-side payment models, the Press has introduced some interesting innovations. Let’s describe its approach therefore as variations on a theme.

The first point to make is that, as a non-profit publisher subsidised by its host university, and with its own foundation, UC Press has been able to set Collabra’s APC at $875. This is not only significantly lower than what commercial publishers charge, but considerably lower than the $1,350 per paper charged by PLOS ONE, the pioneering mega journal launched by the non-profit Public Library of Science in 2006.

Moreover, only $625 of this fee will go to Collabra, with $250 being pooled in what the publisher calls a “Research Community Fund”. This fund is then used to pay editors and reviewers a fee for their services. Explaining how it works to Scholastica, UC Press’ director of digital development Neil Christensen said, “[O]n a quarterly basis we look at activities: Reviewer A had X many decisions, Editor A had X many decisions, and for each decision there is a point value. You take the total sum of the money in the pool and then divide it by the total sum of the points that have been generated for that period, and then allocate the money based on how many points or value each individual has contributed.”
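Christensen’s description amounts to a simple pro-rata allocation, which can be sketched as follows. The names, point totals, and pool size here are hypothetical, not UC Press’s actual figures.

```python
# Pro-rata allocation of the Research Community Fund, as described above:
# each editorial decision earns a point value, and each quarter the pooled
# money is divided by the total points generated, so contributors are paid
# in proportion to the points they earned.

def allocate_fund(pool: float, points: dict) -> dict:
    """Split `pool` among contributors in proportion to their points."""
    per_point = pool / sum(points.values())
    return {person: pts * per_point for person, pts in points.items()}

quarter = {"Reviewer A": 6.0, "Reviewer B": 2.0, "Editor A": 4.0}
payouts = allocate_fund(1000.0, quarter)
# Reviewer A earned half the points, so receives roughly half the pool;
# the payouts always sum to the pool, however the points are distributed.
```

Whatever the point values assigned per decision, the scheme guarantees that exactly the pooled $250 slices are paid out each quarter, which is what makes the later invitation to donate the money meaningful.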

It is this novel feature that has attracted the most attention to Collabra (see here and here, for instance). But in fact the more interesting aspect of Collabra’s model is that editors and reviewers are invited not to take the money they have earned, but to give it away—either by donating it to the Collabra Waiver Fund, or to their own institutional open access fund. By doing so, they can help researchers who do not have the money needed to publish make their work open access too.

What this does is draw our attention to the fact that scholarly publishing is essentially a communal and collaborative activity, and one that works best when scientists and scholars are able to share their findings in as frictionless a way as possible. While the Internet has made it technically much easier to share research, current models of open access have made it financially harder (since authors now need money to pay to publish). As noted, this is especially difficult for those in subjects with little in the way of funding. With Collabra, UC Press is proposing that one way of mitigating this new obstacle is to invite researchers to share the costs of open access publishing amongst themselves in an equitable way.

And with this same aim in mind, Collabra plans to “pair” different research fields. As Mudditt explains below, “One of Collabra’s core innovations is to test the thesis that we can use income from fields with higher research funding to support those with little or no funding. As such, this requires us to publish both in fields that have substantial funding (such as the life sciences) and those that have far less (in this case, social and behavioural sciences).”

The same community-focussed approach is also inherent to the Luminos model. While its publication fee ($15,000) is comparable to that charged by other publishers, UC Press will subsidise the fee through a library membership scheme (research libraries are being asked to pay an annual fee of $1,000 in order to “directly support researchers in getting vital work into the world” and to “help ensure access to this work is open and free to everyone”). The publishing costs will also be directly subsidised by UC Press. As a result, it is expected that the cost to the author will be halved to around $7,500. UC Press assumes that in most cases the author’s institution will pay the subsidised fee, but it has also created a Luminos fee waiver fund for those unable to obtain institutional support.

And in a similar collaborative spirit, UC Press is working with the California Digital Library (courtesy of a $750,000 grant from the Andrew W. Mellon Foundation) to develop a web-based open-source content management system to support the publication of open access monographs in the humanities and social sciences. When complete, the system will be made available to the wider community of academic publishers, especially university presses and library publishers.

So far as licensing goes, UC Press has decided to directly emulate what other OA publishers are doing. All the papers published by Collabra, for instance, will be licensed under a CC-BY licence—as they are with PLOS, eLife, PeerJ, and F1000Research. And authors publishing with Luminos will be able to choose from a range of Creative Commons licences, as they can with Knowledge Unlatched. When asked on the Scholarly Kitchen blog about the latter decision, Mudditt explained that research undertaken by UC Press (and by Knowledge Unlatched) had “unearthed significant concerns from authors about losing control of their material.”

In summary, while UC Press’ OA programmes could be described as variations on a theme, they come with some interesting innovations. These innovations remind us that scholarly communication works best when it experiences as little friction (both technical and financial) as possible. They also remind us that communicating research is essentially a communal and collaborative process. And since for some authors open access introduces financial obstacles that did not previously exist, it follows that the research community needs to come up with new non-discriminatory ways of sharing the costs of scholarly communication.

It is also possible that today’s author-side pay-to-publish OA models may not prove workable in the long term. The OA membership schemes being introduced by large journal publishers, for instance, seem destined to recreate the dysfunctional market conditions that subscription publishers are accused of creating with the big deal. As such, it is not currently clear that open access will solve the affordability problem that caused many to join the OA movement in the first place.

But the good news is that if publishers like UC Press continue to experiment, and to innovate, both the accessibility and the affordability problems may eventually be solved.

To find out more about UC Press’ open access plans please read Mudditt’s answers to my questions below.

Wednesday, February 18, 2015

Open Access and the Research Excellence Framework: Strange bedfellows yoked together by HEFCE

When the Higher Education Funding Council for England (HEFCE) announced its open access policy last March, the news was greeted with great enthusiasm by OA advocates, who view it as a “game changer” that will ensure all UK research becomes freely available on the Internet. They were especially happy that HEFCE has opted for a green OA policy, believing that this will provide an essential green component to the UK’s “otherwise one-sided gold OA policy”.

The HEFCE policy will come into effect on 1st April 2016, but how successful can we expect it to be, and what are the implications of linking open access to the much criticised Research Excellence Framework (REF) in the way HEFCE has done? These are, after all, strange bedfellows. Might there be better ways of ensuring that research is made open access?
Yoked together

What OA advocates particularly like about the HEFCE policy is that, in order to comply, researchers will not have to find the money needed to pay to publish in gold OA journals (as they are asked to do under the OA policy introduced by Research Councils UK in 2013). Rather, the HEFCE policy states only that papers must have been deposited in an open repository (on acceptance) to be eligible for submission to REF2020; it is agnostic on whether researchers opt for green or gold.

HEFCE assumes that since no UK academic will want to risk not being submitted to the REF, they will ensure that copies of all their peer-reviewed papers and conference proceedings are made freely available on the Internet, regardless of whether they publish in OA or subscription journals. Not being submitted to the REF can have serious consequences for a researcher’s career.

Will HEFCE’s assumption prove right? At the time it announced its policy the funder cited some research implying that compliance levels will be very high. As it put it, “Our analysis of a sample of journal articles and conference proceedings submitted to the current REF shows that authors could have achieved 96 per cent compliance with the access requirements in this policy, had the policy been in place for REF2014. The remaining 4 per cent of outputs would have remained eligible for submission to the REF as exceptions.”

Does this mean that we can anticipate that 96% of journal articles and conference papers produced by UK researchers will become freely available on the Internet? I explore this and other issues in the PDF file linked below.

Some of the points I make are as follows:

·         There are a number of reasons to believe that the HEFCE policy will not make as much UK research freely available as OA advocates anticipate, not least because the number of researchers submitted to the REF is surprisingly low. In addition, the excessively punitive nature of the REF may alienate researchers from open access rather than endear them to it.

·         By tying open access compliance to the REF, HEFCE has opened the door for university administrators to appropriate OA for their own ends. As such, the HEFCE policy can be expected to increase the bureaucratic scrutiny that UK researchers are subjected to, and encourage ever greater micromanagement. This is likely to further alienate researchers from open access.

·         Between them the RCUK and HEFCE policies look set to be extremely costly to manage and police. This will inevitably see money that would otherwise be used to do research and hire new researchers siphoned away to pay administrators, and to cover management overheads.

·         As things stand, historians of the open access movement may be inclined to conclude that UK OA advocates made a strategic error in seeking to co-opt government to their cause, overlooking the fact that government has its own agenda, and so would inevitably seek to capture and mould open access to fit that agenda.

·         Specifically, the HEFCE policy needs to be seen in the context of the UK government’s neoliberal agenda, an agenda that has become increasingly focused on commodifying higher education, and now seems intent on encouraging excessive commodification of the research produced in universities as well.

·         Meanwhile, gold open access is being appropriated by publishers, with the apparent blessing of the UK government. As a result, publishers are migrating their journals to an open access environment on their own terms, and in a way that locks their current profit levels into the OA environment, even though those profits are widely held to be unacceptably high.

·         OA advocates have always argued that open access is inevitable and optimal. If that is right, then the issue is not whether open access will become a reality, but how and when it will. So the key question is this: how does one create a culture in which openness is viewed as the norm? Is it better to try and win hearts and minds by engaging people in a debate about open access, telling them about the benefits, and creating incentives to encourage them to embrace it? Or is it better to try and force them to embrace it by tying it to punitive regimes that end up excluding the majority, and micro-managing everyone to a standstill?

·         Green OA advocates insist that compulsory policies are essential, since they are the only way of getting OA repositories filled. As such, the HEFCE policy is modelled on the much-celebrated OA policy introduced in 2007 at the University of Liège. This was the first policy to make deposit in an institutional repository a requirement for researcher evaluation. But was it the right model for a UK funder like HEFCE?

·         An important issue with the HEFCE policy is that the principles inherent in the OA movement are those of sharing and egalitarianism. By contrast, the REF is built on the principles of exclusion, elitism and punishment. These are strange bedfellows, and we have to wonder how the elitism of the REF can be compatible with the idealism of open access.

·         Is compulsion really essential? There is, after all, an alternative green OA model — the so-called Harvard model. This is a voluntary approach. It is worth noting that although Harvard’s repository (DASH) does not currently boast as many deposits as the University of Liège’s ORBi repository, it is nevertheless growing at an exponential rate, and it experienced twice as many downloads as ORBi last year. Is not the ultimate test of a successful repository the number of downloads, not the number of uploads?

·         OA advocates would rightly argue that there is a limit to what a comparison of just two OA repositories can tell us. After all, they might say, there is no shortage of universities with weak OA policies and empty repositories. While this is true, it points to the fact that open access advocates in those institutions have failed to make the case for OA to their peers. It is for this reason that they have turned to funders and governments to force OA on their colleagues. This could turn out to be a dangerous game to play.

·         Open access advocates can rightly boast today that they are persuading more and more funders and governments to force their peers to embrace OA. But this is not so much a victory for advocacy as a victory for top-down compulsion, and in many cases it is likely to lead to a further erosion of researchers’ rights.

To read the full document please hit the link here [29 page pdf].

Monday, December 15, 2014

The Open Access Interviews: Dr Indrajit Banerjee, Director of UNESCO’s Knowledge Societies Division

The mission of UNESCO, which was founded in 1945, is to “contribute to the building of peace, the eradication of poverty, sustainable development and intercultural dialogue through education, the sciences, culture, communication and information.”
Indrajit Banerjee
An important plank in that mission is a commitment to help build inclusive and equitable knowledge societies. We should not be surprised, therefore, that UNESCO supports the Open Access movement; we should not be surprised that it was the first UN agency to adopt an OA policy; and we should not be surprised that it now makes its own publications Open Access.

Today UNESCO’s OA repository (OAR) provides free access to over 500 of its own books, reports and articles in over 11 languages, and in recent years it has created a number of OA portals, directories, knowledge banks and Open Access indicators.

In actual fact, argues Indrajit Banerjee, a commitment to both openness and to science has been implicit in everything UNESCO has done since it was founded in 1945. Immediately after the Second World War, for instance, it was one of the chief architects of the portion of the 1948 Universal Declaration of Human Rights aimed at safeguarding the rights of researchers. Specifically, Article 27 of that declaration asserts that everyone has the right to freely share scientific advancement and its benefits.

Subsequently, in 1974, UNESCO proposed a set of special recommendations concerning the status of science researchers; and in 1999 it organised a World Conference where a declaration on science and the use of scientific knowledge was agreed.

UNESCO’s advocacy for Open Access as such began shortly after the 2002 Budapest Open Access Initiative (BOAI), where the term Open Access was first adopted and a definition of OA agreed. In 2003 UNESCO had its first high-level success in OA advocacy, when it successfully lobbied for universal access to scientific information and knowledge to be included as one of the Action Lines (C3) of the World Summit on the Information Society (WSIS) process.

In 2009, UNESCO was requested by its member states to draw up a strategy for Open Access, a strategy approved at UNESCO’s 187th session in 2011. This contains a set of short, medium and long-term action plans (to be achieved within set time frames) to assist governments in strengthening the processes for granting irrevocable rights of access to copy, use, distribute, transmit and make derivative works of research outputs in any format, within certain constraints.

The strategy also stresses that UNESCO should place particular emphasis on making publicly-funded scientific information (journal articles, conference papers and datasets of various kinds) freely available.

As a global organisation with 195 member states and 9 associate member states, much of UNESCO’s work takes place at the level of national governments and regions. To that end it regularly convenes high-level meetings in order to educate national governments about the benefits of OA. It also commissions research, reports, and guides on OA (often in partnership with other large organisations like the EU).

Given its broad mission, UNESCO views Open Access not as an end in itself, but as one of a number of important tools that can help achieve its wider objective. The toolkit includes other free and open approaches like Open Data, Open Educational Resources, and Free and Open Source Software, plus tools designed to facilitate and encourage sharing such as Creative Commons licences.

Above all, UNESCO believes that the success of OA depends on effective capacity building. In the context of OA this implies facilitating “a set of activities to improve awareness, knowledge, skills and processes relevant to the design, development and maintenance of institutional and operational infrastructures and other processes for implementing Open Access”.

And with its focus on creating inclusive and equitable knowledge societies, UNESCO approaches Open Access from the perspective of human rights and the eradication of poverty, and sees ICTs playing a vital role in achieving its objectives in these areas. Its two global priorities currently are Africa and gender equality. As such, it is determined to ensure that Open Access is implemented in ways likely to help, rather than further marginalise, developing nations, and in a gender neutral way.

In light of all this, as the UN’s Millennium Development Goals give way to the Sustainable Development Goals, UNESCO is keen to embed Open Access into the new goals, viewing OA as a vital tool for achieving them.

Given its international perspective, and its authority, UNESCO also believes that it is ideally suited to oversee a global debate on Open Access, a debate that — in light of the growing danger that Open Access could end up excluding rather than including the developing world — is now pressing. To this end, UNESCO hopes to organise the first international congress on OA.

To get a better sense of UNESCO’s interest in, and work on, OA, and what it feels to be the key issues going forward, I sent seven questions to the director of UNESCO’s Knowledge Societies Division Indrajit Banerjee. The answers turned out to be admirably comprehensive, so I list a few choice quotes from Banerjee’s answers below. I urge everyone to read the full text.


·         The primary reason for UNESCO to be involved in Open Access stems from the fact that the organization believes in “Maintaining, increasing and diffusing knowledge by encouraging cooperation among the nations in all branches of intellectual activities”.

·         UNESCO’s role in the global Open Access movement is to foster OA at the highest possible level by continuing to build on the pillar of universal access to information and knowledge to empower local communities by bringing experts together and utilizing its global network of regional and field offices, Institutes and Centres.

·         Guided by the organization’s founding principle that universal access to information is the key to building peace, sustainable economic development and intercultural dialogue, UNESCO must continue to raise awareness, formulate policies and build capacities to promote Openness in content, technology and processes, with particular emphasis on scientific information.

·         In an era where the World Wide Web plays an increasingly vital role in the intellectual development of societies, information digitization has revolutionized the means by which we share knowledge. As the ‘intellectual’ agency of the United Nations, UNESCO has a central and critical role in encouraging the universal sharing of all forms of knowledge in real time to build inclusive Knowledge Societies. This may be through the classical form of dissemination, but more importantly by supporting the Open Access movement enabled through the power of the Internet.


·         We understand that OA publications are underrated because there is a lack of a policy that fully respects the effort behind the publications. There is a serious concern about peer review processes employed by OA journals.

·         There is an increasing concern that although the OA mode of research publication is becoming increasingly popular, it has not positively impacted the ability of researchers from developing countries to publish their research works.

·         The policy issues surrounding OA, adoption of policies (and/or mandates), implementation of policies (and/or mandates), monitoring and evaluation of these policies (and/or mandates) still need to be improved for most countries.

·         Furthermore, in the countries which have formulated and established OA policies/mandates, they have not been able to produce any solid evidence that OA is indeed having a positive impact on knowledge production and dissemination in the country. As the contribution of Open Access to the cost of research saved and the amount of knowledge gained are still not properly evaluated, the condition of “lead-by-example” is lacking.

·         We have also noted that within countries, those who can make a difference still lack a good understanding of OA and therefore do not fully support the OA movement, for fear of job loss and negative impact on its publishing industry.

·         Development, sophistication or understanding of OA is not evenly distributed, by geography or by subject. There is a strong need for the cross-fertilization of ideas and conditions for synergy to be properly discussed and explored in their entirety.

·         As the Global South catches up with the North in terms of scientific output, for instance, it allows for greater innovation in OA, and provides opportunities for developed countries to adopt some of the less costly OA methods that have emerged in developing countries. So, for instance, innovation in Latin America is enabling a lower APC cost base. New models like this could benefit the North.

·         At the same time, innovative methods from the North are being implemented in some developing countries. This cross-fertilisation could be very productive and so we are documenting the processes involved.


·         OA is central to UNESCO’s activities in the future. It is part of our Open Solutions programme and we are convinced that Open Access should be an integral agenda in any effort to create Knowledge Societies.

·         UNESCO must mobilize stakeholders to organize regional consultations and explore the possibility of organizing the first international congress on Open Access to scientific information and research. This international congress should analyse the existing national and international legal framework concerning Open Access and examine the necessity for the elaboration of a new international instrument.

·         UNESCO must also play a role in combining the context of Open Access within the broader understanding of Openness and link it with Open Educational resources (OER); Open Training Platform (OTP) and Free and Open Source Software (FOSS).

·         UNESCO is also concerned about the role that Open Access can play in realizing the Post-2015 Development Goals. Dedicated research is currently ongoing to identify the potential of Open Access within the broader context of the SDGs.

·         As a specialized agency of the UN system, UNESCO is playing its part in analyzing the concern about poverty (and other human challenges) and is committed to making Open Access one of the central supporting agendas to achieve the SDGs.

·         Out of 17 goals proposed for the next SDGs, at least 10 goals need constant research inputs. Given that these goals must be achieved globally, there is an absolute need for any restriction to disseminate research outputs to be comprehensively addressed. So in the next 15 years, OA to research will play a fundamental role in supporting efforts to achieve these goals.

·         UNESCO is working with its partners to provide a closer look at the Impact Factor. While the existing bibliometric, scientometric and altmetric approaches are robust, their upstream usage has remained very limited.

·         The extent to which the Knowledge Divide is narrowed, and to which we are able to create societies that are truly Knowledge Societies, will determine the pace of development. OA has the potential to lessen the existing knowledge divide. This gap goes beyond the rifts in mere access to ICT. It refers to the gaps that exist across all the four building blocks of Knowledge Societies, namely: Knowledge Creation; Knowledge Preservation; Knowledge Dissemination; and Use of Knowledge.

·         Opening access to knowledge is thus a fundamental part of the approach that can open and address the many jagged facets of Sustainable Development. OA uses ICTs to increase and enhance dissemination of scholarship. Sustainable Development and the creation of Knowledge Societies therefore are two sides of the same coin. 

·         The theme of inclusive Knowledge Societies continues to be at the heart of UNESCO’s work to fulfil the WSIS objectives. Inclusive Knowledge Societies are societies in which people have ready access to information and communications resources, in languages and formats that suit them, and the skills to interpret and make use of them. The Organization’s future work will thus be to establish the context of OA within the broader framework of inclusive Knowledge Societies. UNESCO will continue to pursue this objective vigorously through its own programmes on OA as well as in partnership with other organizations and UN agencies.

The interview with Dr Indrajit Banerjee is available as a pdf file, and can be accessed HERE

Please note that the text in the pdf file is licensed under CC-BY.