JournalTOCs Blog

News and Opinions about current awareness on new research

Archive for the ‘User Support’ Category

Changes to our free service

without comments

In order to continue providing our users with the best service possible, JournalTOCs will be implementing some changes in May 2018 which will affect the users of our free accounts.

Firstly, the maximum number of journals that free accounts can follow will be reduced to 15. This continues the change we announced in a blog post two years ago, which can be viewed here; the reasons we outlined then still apply now.

Furthermore, the maximum number of articles returned by the Articles-Search API for free accounts will be 25 articles per search. Anyone without an account will still be able to use the API, but their search results will be limited to 10 articles per search.

Finally, the Articles and the Journals APIs will only return results in XML (RSS) format.
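Since the APIs will return XML (RSS) only, free-account users who currently consume JSON will need to parse RSS instead. As a rough sketch, here is how a client might extract article titles from an RSS 2.0 payload using only the Python standard library; the sample feed below is illustrative, and the actual field set returned by the JournalTOCs APIs may differ.

```python
import xml.etree.ElementTree as ET

# Illustrative RSS 2.0 snippet of the general shape an Articles-Search
# response might take; not an actual JournalTOCs response.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Search results</title>
    <item>
      <title>An example article</title>
      <link>https://example.org/article/1</link>
    </item>
    <item>
      <title>Another example article</title>
      <link>https://example.org/article/2</link>
    </item>
  </channel>
</rss>"""

def article_titles(rss_text, limit=25):
    """Return up to `limit` article titles from an RSS 2.0 payload
    (25 mirrors the per-search cap for free accounts)."""
    root = ET.fromstring(rss_text)
    items = root.findall("./channel/item")
    return [item.findtext("title") for item in items[:limit]]

print(article_titles(SAMPLE_RSS))
```

The same approach works for any RSS 2.0 feed, so existing JSON-based clients only need to swap their parsing layer.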

These limits will not apply to Premium account holders, who can continue to follow up to 300 journals per user account, search for articles using the APIs without any limitations, and receive search results in either XML (RSS) or JSON format. If you are interested in arranging a JournalTOCs Premium subscription for your institution, please get in touch with us at:

Written by Hayley Gibson

May 8th, 2018 at 10:59 am

The easy way: Dealing with large-scale move of RSS feeds from HTTP to HTTPS

without comments

A month ago, the world suffered a global cyberattack described by the international press as the ‘biggest ransomware’ offensive in history. Although the attack used a technique known as phishing (hackers spread ransomware called WannaCry by tricking email users into opening attachments and releasing malware onto their systems), companies and organisations implemented every security measure available to them. One of the most common measures implemented by many journal publishers was to switch every webpage from HTTP to HTTPS (the secure protocol) in order to encrypt their content and transport it safely over the net.

While using HTTPS for every webpage, including pages that do not contain sensitive information, might seem an exaggerated and disputable measure, it is one of the quickest and most efficient ways to protect a website. However, this measure has had an unintended effect on the RSS feeds used by journals to announce their new content: as a result of all these URLs changing, people who manually added the previous URLs to feed readers are finding that those feeds are now out of date and no longer provide the latest Tables of Contents. Even in popular RSS reader services such as Netvibes, the previous feed URLs are not working.

It is up to individuals whether they wish to load RSS feeds into their own readers, but in doing so, if a URL changes, they will need to update the feed manually. The benefit of using an aggregation service such as JournalTOCs is that we constantly maintain our database of feeds to ensure that we link only to the latest ones and that the content displayed in JournalTOCs is up-to-date. In the past couple of weeks we have updated thousands of feeds, using manual and automated methods, and this work continues. In essence, JournalTOCs does the work so that you don’t have to.
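The automated part of such an update is conceptually simple: rewrite the scheme of each stored feed URL and then verify that the new URL still serves the feed. Below is a minimal sketch of the rewriting step, using only the Python standard library; it is illustrative, not JournalTOCs' actual migration code, and a real migration would also fetch each rewritten URL to confirm it works.

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_feed_url(url):
    """Rewrite an http:// feed URL to https://, leaving other
    schemes (and already-secure URLs) untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

# A hypothetical publisher feed URL, before and after the rewrite:
print(upgrade_feed_url("http://www.example-publisher.com/toc/rss/jabc20"))
# → https://www.example-publisher.com/toc/rss/jabc20
```

Run over a database of thousands of feed URLs, a helper like this handles the bulk of the move, leaving only the feeds that changed path (not just scheme) for manual attention.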

Written by Santiago Chumbe

June 23rd, 2017 at 2:26 pm

JournalTOCs free accounts can follow up to 25 journals

without comments

At the moment we are dealing with a high volume of daily email alerts caused by the increasing number of free accounts, some of which are following hundreds of journals. To protect the normal service of JournalTOCs, we are moving free accounts to a separate server, and from next week the maximum number of journals that a free account can follow will be limited to 25. Users registered with the free service of JournalTOCs are advised to follow no more than 25 journals and to remove any extra journals from their accounts. The new limit of 25 journals per account doesn’t apply to Premium users.

Written by Santiago Chumbe

December 7th, 2015 at 12:33 pm

Posted in User Support

At last we got “200 OK” from Atypon for Maney

without comments

wget for Maney journal RSS feeds

One year ago (21st March 2014 to be exact) we contacted Helen Duce, the Head of E-Publishing at Maney Publishing, because after Maney migrated to its new Atypon’s e-publishing platform (Literatum), JournalTOCs was unable to crawl the TOC RSS feeds of Maney’s journals.

JournalTOCs not only uses the effective and simple RSS feeds to get the latest articles from over 25,000 journals; it also fetches them with a very basic invocation of the simple, but still effective, wget unix command:

wget -O newtocs.tmp "journal-RSS-feed-URL" 2>&1

That is it: a plain wget that has nothing to hide and makes no use of wget’s rich options to force crawling.

As we can only communicate with publishers, we couldn’t discuss the problem directly with Atypon, so we contacted Maney many times. While Helen was very helpful, Atypon kept telling Maney that everything was OK at their end; yet we knew we were being refused access to the RSS feeds.

Today, Helen gave us the good news that Maney have finally heard back from Atypon on this issue. It turns out that our IP range was blocked by Maney Online (Atypon) because of “abuse monitoring”: JournalTOCs was crawling content (RSS feeds) which Atypon flagged up as abuse.

Fortunately the misunderstanding has been resolved, and Atypon now accepts that crawling RSS feeds is not abuse. The very reason for having RSS feeds is to enable other services to crawl and reuse them, facilitating the widest dissemination of your content, which at the end of the day benefits your business by increasing the number of visitors to your site.

We are glad to be able to access Maney’s RSS feeds again. We will restore the Maney journals that were selected by the JournalTOCs Index and start updating their TOCs. In the last year, usage (number of followers) of Maney’s journals has decreased at JournalTOCs, but we hope that once users see that Maney’s journals are being updated, they will start to follow them again.

Publishers that change platforms should check that their RSS feeds remain accessible to aggregators and discovery services. By working together, publishers, discovery services, aggregators and e-publishing platforms can have a real positive impact on the dissemination of research.

“the success of these systems [link resolvers and knowledgebases] and services is ultimately dependent upon the cooperation of the various players across the supply chain of electronic resource metadata”
(van Ballegooie, Marlene (2014) Knowledgebases: The Cornerstone of E-Resource Management and Access. Serials Review 40(4), pp. 259–266. DOI: 10.1080/00987913.2014.977127)

Written by Santiago Chumbe

March 13th, 2015 at 1:02 pm

Why publishers should never NOINDEX their RSS feeds

without comments


(Update: Three months after this blog post was published, OA Publishing London removed the NOINDEX meta-tag from its RSS feeds. All the journals currently published online by OA Publishing London have now been restored in JournalTOCs.)

Last week, JournalTOCs stopped indexing all 40 journals published by OA Publishing London, because this publisher took the unusual and illogical measure of requesting that aggregators not index (aggregate) the RSS feeds for the current issues of its journals. Tables of Contents from the OA Publishing London journals will no longer be updated at JournalTOCs, and those who have been following any of the 40 journals will not be able to keep up with new issues.

Why would OA Publishing London want to stop aggregators and search engines from crawling and collecting its RSS feeds? Years ago it might have made some sense to use the noindex meta-tag on RSS feeds, but nowadays there is no need to noindex such feeds: Google and other modern search engines can easily identify RSS feeds, and they act on that by not including feeds in web search results.

Publishers should, in reality, very much want their RSS feeds to be indexed, because doing so helps aggregators and search engines direct users to where the newest content is. Search engines are smart enough to understand the difference between a feed and a webpage, and they use the feed as a pointer to the webpage where the real source of the content resides. Allowing search engines to index RSS feeds is therefore an important way to drive traffic to the webpages holding the actual content.

There is no scenario in which a publisher is not interested in having its latest content indexed. Old feed generators, such as the deprecated FeedBurner, still offer users the outdated option to noindex feeds to prevent them from being penalized by search engines. Publishers can be reassured that this is no longer an issue: indexed feeds do not create penalty situations, and Google itself will normally not show RSS feeds in search results.

The noindex meta-tag is not good for publishers. Any publisher who wants to enable RSS readers, aggregators and APIs to reuse details of their content should make sure to remove the noindex meta-tag from their RSS pages and from their software that generates RSS feeds.

The noindex meta-tag to be removed looks like this:

<meta name="robots" content="noindex">

This code tells search engines and aggregators that they should not index or crawl the content of the RSS feeds.
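As an illustration of what this means in practice, an aggregator could screen each fetched page for that tag before deciding whether to index it. The helper below is a minimal sketch of such a check using Python's standard-library HTML parser; it is not part of JournalTOCs' actual pipeline.

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Sets self.noindex when a <meta name="robots"> tag whose
    content includes "noindex" is encountered."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if (a.get("name", "").lower() == "robots"
                    and "noindex" in a.get("content", "").lower()):
                self.noindex = True

def has_noindex(markup):
    """Return True if the markup carries a robots noindex meta-tag."""
    detector = NoindexDetector()
    detector.feed(markup)
    return detector.noindex

print(has_noindex('<meta name="robots" content="noindex">'))      # True
print(has_noindex('<meta name="robots" content="index,follow">')) # False
```

A page (or feed wrapper) that trips this check would simply be skipped by a well-behaved crawler, which is exactly why the tag keeps content out of aggregators.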

So, if you want the abstracts of your latest publications to be indexed by JournalTOCs, search engines, aggregators or any other web service, and thus ensure that hundreds of thousands of potential readers can discover your content, make sure you ARE NOT using the noindex meta-tag.

The noindex meta-tag can help in search engine optimization (SEO), but it should be used wisely rather than on the assumption that it is always a good idea. noindex should only be used for web pages you don’t want showing up in search results or want to hide from the external world: for example a test page, an archive page, or anything similar that is not relevant to the publisher’s business. Giving such pages the noindex tag prevents them from taking the place of the really important pages in search results, since Google’s algorithm tends to avoid placing multiple links from the same domain on the front page (unless the website has a very good ranking).

In fact, for optimal crawling Google recommends also using RSS/Atom feeds.

RSS pages (feeds) are not only relevant pages in themselves; they are used by search engines and aggregators to redirect users to your relevant webpages! They help to market your real content. They are good for everyone: readers, authors, end users and your business.

Written by Santiago Chumbe

January 26th, 2015 at 5:07 pm