Archive for the ‘useCases’ tag
Marjolein Hoekstra once pleaded with Feedly to consider adding support for dynamic OPML. Her post summarised very well why OPML is important: it lets RSS aggregators dynamically detect any change in the list of subscribed feeds. When an OPML file is updated, so is the aggregator. The same scenario applies to publishers of scholarly journals.
JournalTOCs endeavours to keep its records of the journals published by its selected publishers up to date. Unfortunately, when publishers don't provide OPML files listing their current journals, information about those journals inevitably grows stale in the JournalTOCs database.
Producing an OPML file is simple and poses no challenge for web developers. Here, we are glad to post a good example of an OPML implementation by a major publisher.
Laura Paterson is the Program Administrator in charge of Marketing Online Data Focus at Annual Reviews. On 24 September 2010, while reading the UKSG e-resources mailing list, she noticed that several of Annual Reviews' newer journals were not listed on the JournalTOCs website. Laura emailed JournalTOCs to enquire about the process for adding additional titles. In her first email, Laura listed the eight journals that were missing from JournalTOCs, attached a KBART-formatted list of all the titles published by Annual Reviews (including ISSN and URL information for each title), and offered her help to get all the Annual Reviews titles into JournalTOCs.
We immediately added the missing titles and replied to Laura, encouraging her to arrange the publication of an OPML file on the Annual Reviews website.
A month later, Laura informed us that Annual Reviews had created an OPML file for its journals, which can be found here.
What is more, Annual Reviews had also created a whole range of useful RSS feeds, from Table of Contents RSS feeds for each of their journals to Annual Reviews Audio Series RSS feeds. They have created a webpage that lists all of the RSS feeds available from Annual Reviews: https://www.annualreviews.org/page/about/rssfeeds
We were impressed with the way Annual Reviews had implemented support for OPML and RSS. Their work demonstrates that producing good, useful RSS and OPML files is neither expensive nor complex. The OPML file and the RSS feeds produced by Annual Reviews are good examples of using web feed formats to publish frequently updated journal information in a standardised way, for the benefit of readers and subscribers of scholarly journals. The OPML file is simple (see the following XML code) but provides enough information to keep automated track of the published journals. Some would argue that the ISSN and a subject classification should be added for each journal. However, the file accomplishes its main purpose well, and that is much better than nothing.
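A journal-list OPML file of this kind typically looks like the sketch below (the journal titles and URLs are invented, not taken from the actual Annual Reviews file), and an aggregator can read it with a few lines of standard-library Python:

```python
# Minimal sketch: parse a journal-list OPML document and extract the feed URLs.
# The OPML below is illustrative only; titles and URLs are made up.
import xml.etree.ElementTree as ET

OPML = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head><title>Example Publisher Journal Feeds</title></head>
  <body>
    <outline text="Example Review of Biology" type="rss"
             xmlUrl="https://publisher.example.org/feeds/biology.rss"/>
    <outline text="Example Review of Physics" type="rss"
             xmlUrl="https://publisher.example.org/feeds/physics.rss"/>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Return (journal title, feed URL) pairs for every RSS outline element."""
    root = ET.fromstring(opml_text)
    return [(o.get("text"), o.get("xmlUrl"))
            for o in root.iter("outline") if o.get("xmlUrl")]

for title, url in feed_urls(OPML):
    print(title, "->", url)
```

An aggregator that re-fetches the OPML file periodically and re-runs this extraction will automatically pick up new journals, transfers and cessations, which is exactly the dynamic behaviour described above.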
It is worth noting that Laura, being an expert in the field of online discoverability, was able to quickly perceive the importance of RSS and OPML for Annual Reviews' business. All the major publishers are already publishing OPML files on their websites, and the message for the rest is simple: having an OPML file on your website is highly beneficial for your business.
Manually subscribing to each of the journal TOC RSS feeds produced by a publisher is a time-consuming task. Journal users have to periodically visit the publisher's web pages to keep track of new journals, transfers or journals that have ceased to exist. Some publishers have tried to make these processes easier (for example, Springer's "Forget Me Not"), but none of these mechanisms is as easy and convenient as listing all the journals in an OPML file. OPML enables users to find the up-to-date list of journal feeds from a single point. Through OPML, users can always see which journals the publisher is currently publishing and get the latest content for any journal with less effort than visiting the journal feeds repeatedly.
Measuring the usefulness and effectiveness of the API: A retrospective view of prototyping the use cases
The project identified two use cases in the context of helping Institutional Repository (IR) managers ensure that their content is complete and up to date. The first Use Case addressed the need of IR managers to gather articles for the IR as they are published. The second Use Case looked into the need of IR managers to be alerted when deposited "submitted" articles have been published in scholarly journals. The project developed and prototyped a lightweight RESTful API to solve or alleviate both cases, by making use of content that is already freely available, namely journal TOC RSS feeds.
The first Use Case was tested using information provided by the British Geological Survey repository NORA (NERC Open Research Archive) and by the University of Warwick repository WRAP (Warwick Research Archives Project). In the case of the WRAP repository, only data from the Department of History was used. The methodology used for testing this use case was presented at the project workshop and made available in the JournalTOCs Workshop: Presentation 3 – Testing the First Use Case blog post. Basically, the methodology involves two kinds of searches: one "batch" search and one set of "search by keywords" queries (the keywords being terms extracted from the institution name). The batch process, which combines searches by author, institution and subject, needs to be configured in advance and run offline. The search by keywords is done online and doesn't require any previous configuration. The analysis of the results shows that only 28% of the batch results were positive (articles really authored by researchers from the institution). On the other hand, 52% of the results produced by the best combinations of terms in the search-by-keywords approach were positive. (Interestingly, in the NORA case, the extra effort of running a batch process identified only two more authors than the quick search by keywords.)
From the results obtained for the first Use Case, we consider searching by keywords the most suitable option, despite it producing only about 50% positive results on average. The "batch" search does not justify the cost it imposes on the IR manager and the API developer: it requires a setup for each repository, which is time consuming for IR managers because they need to identify the authors and subjects relevant to their IR. Some IR managers have indicated that they may not even be able to get a list of authors for their own institutions. However, the main reason why the "batch" approach, and in general any search by author, fails is that the API is unable to unambiguously identify authors and their affiliations from the TOC RSS feeds. This is a problem beyond JournalTOCs' capabilities. Our project has only confirmed the emerging need for a means of uniquely and reliably identifying authors. We believe that correct identification of authors would enhance the effectiveness of our API and, in general, enable proper discovery and reusability of research output. It is encouraging to know that the extremely difficult task of correctly associating research output with its legitimate authors is being carried out by the Names Project at the national level. On this evidence, it is not worth running a "batch" search based on authors' names. (The problem could also be alleviated if publishers implemented the ticTOCs recommendations and included authors' affiliations in their journal TOC RSS feeds.) The outputs of this Use Case suggest that integrating the API results directly into the repository workflow will not be possible until authors can be unambiguously identified. What IR managers can do is use the API to set up an RSS feed tailored to their institution, based on searching by keywords taken from the institution name. In this way the API will alert the IR manager when new articles mentioning the institution name, or similar names, are published online.
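The "search by keywords" idea can be sketched in a few lines. This is a simplified illustration, with an invented stop-word list and invented item texts, not the project's actual matching code:

```python
# Sketch of the "search by keywords" approach from the first Use Case:
# flag TOC items whose text mentions terms taken from the institution name.

INSTITUTION = "British Geological Survey"
STOPWORDS = {"of", "the", "and"}  # assumed stop-word list, for illustration

def institution_keywords(name):
    """Extract search keywords from an institution name."""
    return [w for w in name.lower().split() if w not in STOPWORDS]

def matches_institution(item_text, keywords):
    """True if every keyword appears in the item text (title + description)."""
    text = item_text.lower()
    return all(k in text for k in keywords)

items = [
    "New groundwater model (authors affiliated with the British Geological Survey)",
    "A paper on volcanology from an unrelated institute",
]
kws = institution_keywords(INSTITUTION)
hits = [i for i in items if matches_institution(i, kws)]
print(hits)
```

A real feed would of course contain many items and variant spellings of the institution name, which is why this approach yields roughly 50% positive results rather than exact matches.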
In the second Use Case we aimed to alert IR managers when submitted articles had been published. (In this context, a "submitted" article is one that has been submitted to a scholarly journal, and in some cases accepted by the peer-review process, but not yet published.) Using sources from Sherpa/RoMEO we created a local directory of 108 repositories, most of them from the UK, including details of their OAI servers and RSS feeds. Our first approach was to set up a process to periodically collect and analyse the RSS feeds produced by the repositories. It quickly became evident to us that those RSS feeds were not suitable sources for our work. The problems found in these RSS feeds are discussed in detail in the 'Do we need a "best practice" for generating RSS's URLs for IR search results?' blog post.
Our second approach to the second Use Case was to use OAI-PMH to harvest the IR OAI servers and thus identify recently deposited articles from the repositories. The first harvesting uncovered interesting findings. First of all, the OAI repositories were not using a standard way to identify or categorise "submitted" articles, even among repositories using the same software platform. Therefore, there was no way to tell for sure whether an article was in fact a "submitted" one. Secondly, we ran a quick survey among 20 IR managers from a sample of harvested IRs. None of them let authors deposit submitted articles directly into their repositories. Most of these managers only took published articles, making the distinction between submitted and published articles almost meaningless. Having not succeeded in identifying "submitted" articles, we decided to apply the look-up tool to each article found in the repository (this approach was only tested with two repositories, and there is no evidence to suggest that it is a scalable solution, even though, at present, repositories hold only a few thousand records each). Two new obstacles were identified when matching against the complete content of the repositories harvested using OAI-PMH: the low number of positive results obtained by this method, and the inability to reliably identify new records from the OAI servers. The two IR managers informed us that using only the title of the article to match harvested articles with the metadata collected from the RSS feeds did not give enough positive results, while adding the keywords, abstract and authors (if available) to the search query only increased the number of false positives.
On the other hand, automatically identifying new records in an OAI repository proved a challenging task, due to inconsistencies in how repositories catalogued the fields that were supposed to identify new records and the dates when updates were made. In conclusion, the second Use Case produced relevant results only when the API was used by the IR manager to manually send search queries, and when those queries included specific keywords taken from the title of the article and the results were filtered by journal title. In those cases the chances of obtaining either positive results or null results are high (the number of negative results is always much smaller than the number of positive results). However, the second Use Case has again highlighted the need for access to rich metadata that uniquely and unambiguously identifies authors.
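The title-matching difficulty described above can be illustrated with a simple normalised-similarity check. This is a sketch of the general technique, with invented titles, not the project's actual code:

```python
# Sketch: compare a harvested repository record against RSS TOC metadata
# by normalised title similarity. Titles below are illustrative only.
import difflib
import re

def normalise(title):
    """Lower-case a title, strip punctuation, and collapse whitespace."""
    t = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return re.sub(r" +", " ", t).strip()

def title_similarity(a, b):
    """Similarity ratio between two normalised titles (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, normalise(a), normalise(b)).ratio()

harvested = "Groundwater flow in fractured aquifers: a review."
from_feed = "Groundwater Flow in Fractured Aquifers - A Review"
score = title_similarity(harvested, from_feed)
print(round(score, 2))
```

Even with normalisation, short or generic titles produce spurious matches, which is consistent with the project's finding that title-only matching gave too few true positives while adding more fields inflated the false positives.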
In general, the most pressing concerns of repository managers were, first, to get content for their repositories and, second, to have high-quality metadata. Even with the limitations mentioned in the previous paragraphs, the API has demonstrated that it can still assist with both aims, as expressed in the feedback sent to the project by the majority of the IR managers who tested the prototype. Users have also appreciated the API's ability to process heterogeneous and incomplete metadata and produce reusable, consistent and "clean" metadata on current publications.
Interestingly, new use cases for the API were identified by the users themselves. In the following paragraphs we briefly mention some of these use cases, or potential spin-offs.
1. Providing relevant metadata to Research Information (RI) systems. Representatives from ATIRA, a Danish software company that commercialises the PURE RI system, approached the project to request that we adjust some of the API's calls to support two PURE functionalities: (1) automatically completing journal metadata when the user catalogues a new article with PURE, and (2) providing cataloguers with an additional or alternative source of bibliographic references, alongside other data sources such as Web of Science, Scopus and BioMed Central.
2. Sherpa/RoMEO is interested in using the API to link journal titles and ISSNs to their publishers. Peter Millington, the SHERPA Technical Development Officer, found the data returned by the API very useful and easy to use. However, he identified the following functionality issues: (1) the API doesn't support all the types of journal-title query that RoMEO offers and needs (e.g. "contains", "starts" and "exact phrase" queries); (2) some keywords that the API ignores in order to support IR managers' queries are needed for RoMEO queries, and the exclusion of stop words such as "journal" is particularly unhelpful in this respect; (3) RoMEO has also requested that we implement a new call to support queries on publisher names, returning a list of their journals.
3. Expanding the "users" call to return a list of articles per user. The API can search by the email address of a registered user and return the list of journals that user has added to their MyTOCs folder. The call is being used by a large number of different types of users (e.g. librarians, students and researchers). Some of these users have requested that we expand the functionality of this call to offer a list of articles in addition to the default list of journals.
4. Using the API to let library users search for the latest articles published in most of the journals for which the University has current subscriptions. This means the user will always be able to access the full text of the articles returned in the search results. This application was requested by the institution leading the project, Heriot-Watt University. The API should be able to interoperate with A-Z journal lists, link resolvers and off-campus access control mechanisms such as EZproxy. In addition, users will be given the option to obtain their search results in RSS format. The library is keen to use the free service offered by the API because it will not need to transfer its holdings to any database external to the library, nor to modify its current database systems. Any UK university would benefit from the development of this API application. The only requirement is that the API is granted restricted but sufficient HTTP access to the library database holding the current journal subscriptions.
5. Embedding search results in subject-based current awareness services. The "institution" call has also highlighted a new use case, or area of application, for the API. This application has already attracted a lot of attention from the community of students and academics in engineering, computing and mathematics since TechXtra launched its new service TechJournalContents, which is fully based on the API. TechXtra is a free service providing access to research, learning and teaching resources in engineering, mathematics and computing. The brand-new TechJournalContents service was well received by TechXtra users and has already been mentioned in more than 50 relevant blogs. We would like to enhance the API's subject classification database to support other subject-based services.
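The journal-title query modes that Peter Millington asked for in item 2 above ("contains", "starts", "exact phrase") can be sketched as a simple dispatch function. This is purely illustrative and not part of the actual API:

```python
# Sketch of the three journal-title query modes requested for RoMEO.
# Illustrative only; the real API's query handling is not shown here.

def title_matches(title, query, mode="contains"):
    """Match a journal title against a query under a given mode."""
    t, q = title.lower(), query.lower()
    if mode == "contains":
        return q in t
    if mode == "starts":
        return t.startswith(q)
    if mode == "exact":
        return t == q
    raise ValueError("unknown mode: " + mode)

print(title_matches("Journal of Applied Physics", "applied", "contains"))
print(title_matches("Journal of Applied Physics", "journal of", "starts"))
```

Note how a "starts" query such as "journal of" only works if the word "journal" is not stripped as a stop word, which is exactly the problem RoMEO reported.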
A final thought from the project is that each of the above use cases, and in general any service based on reusing journal TOC RSS feeds, will greatly benefit from any effort publishers make to implement the ticTOCs Metadata Recommendations and the project recommendation outlined in the Author Affiliation blog post. Publishers need to realise that the required effort is very small compared to the benefits brought by reusable TOC RSS feeds, both for their own business and for the research community in general. The question of how to "convince" publishers to produce valid, consistent and rich journal TOC RSS feeds remains unsolved.
The fourth presentation at the JournalTOCs workshop was given by Nick Sheppard, Repository Development Officer for Leeds Metropolitan University.
The presentation introduced the Bibliosight Project and is now available as a slide cast.
The third presentation at the JournalTOCs workshop was given by Santy Chumbe, the JournalTOCs Project Manager, on behalf of Anne Dixon from the British Geological Survey, who helped to test the first use case for the JournalTOCs project.
The presentation was entitled Testing the First Use Case and is now available as a slide cast.
The presentation was entitled Repositories and Alert Services and is now available as a slide cast.