Posts Tagged ‘questions’

The University of Linking (part 2)

Posted on November 24th, 2011 by Paul Stainthorp

I’m convinced there’s a better way of dealing with information about academic library opening hours than the mess of PDF documents and abuse of JavaScript we rely on at the moment.

Over coffee this morning, Mr Jackson and I drew a Linked Data graph (click for bigger):

Linked Data graph produced using LucidChart (http://www.lucidchart.com/)

What do you think about it? Is it detailed enough – or is it too pedantic? I suspect I’m not using a consistent level of abstraction across the whole graph. It was a second attempt, made over only a very small cup of coffee.

Some, if not all, of the terms in the graph will already have formal equivalents and HTTP URIs. Anybody care to suggest any? Some are probably already to be found at Chris Gutteridge‘s (and colleagues’) data.southampton.ac.uk – can we work the above up into actual real Linked Data triples using a standard notation?
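As a starting point for discussion, here’s a sketch of how a corner of the graph might come out as triples. Every URI and property name below is a made-up placeholder – real modelling would reuse existing vocabularies (and, ideally, the terms already minted at data.southampton.ac.uk):

```python
# All URIs below are hypothetical placeholders, not real identifiers.
LIB = "http://example.lincoln.ac.uk/id/library/gcw"

triples = [
    (LIB, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
          "http://example.org/vocab#AcademicLibrary"),
    (LIB, "http://www.w3.org/2000/01/rdf-schema#label",
          '"Great Central Warehouse Library"'),
    (LIB, "http://example.org/vocab#hasOpeningHours",
          "http://example.lincoln.ac.uk/id/library/gcw/opening-hours"),
]

def to_ntriples(triples):
    """Serialise (subject, predicate, object) tuples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"  # literal vs. URI object
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

(N-Triples is the bluntest of the standard notations; Turtle would be friendlier for humans, but the triples are the same.)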

Finally (so many questions): this still doesn’t quite solve the problem of a standard format for publishing the opening hours themselves. Could that be something as simple as a .csv file? (Easy to update by library staff.) Wouldn’t it be amazing if every academic library in the country published its opening hours (along with its geographical location and contact details) in such a format at a stable URL?
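To make the .csv idea concrete: something like the sketch below would be trivial for library staff to hand-edit, and trivial for anyone else to consume. The column names (and all the values) are invented for illustration – agreeing a real set of columns is exactly the question:

```python
import csv
import io

# Hypothetical columns and values - nothing here is a real standard,
# just the sort of flat file staff could maintain at a stable URL.
sample = """library,date,opens,closes,lat,lon
GCW Library,2011-11-28,08:00,24:00,53.23,-0.55
GCW Library,2011-11-29,08:00,24:00,53.23,-0.55
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for r in rows:
    print(f"{r['library']}: {r['opens']}-{r['closes']} on {r['date']}")
```

Anything that can read a CSV from a URL – a campus app, a national aggregator, a screen in the foyer – could then reuse the same file.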

QR codes AWAY!

Posted on July 7th, 2011 by Paul Stainthorp

It was the annual University of Lincoln Library staff away day on Tuesday. I performed my turn: a 20-minute presentation on QR codes in academic libraries: the culmination of our (JB, PC, CL, MN, PS, EV) little internal mini-project. (There were two other mini research projects which reported on Tuesday: one group looked at improving the student experience; the other at the best ways of promoting new library resources.)

Then we broke off into groups to consider various questions arising out of the work of the project groups. My question was this:

How could we support and encourage the use of mobile devices in the Library?

We talked around this for a while: should we be supporting their use? (We certainly support and encourage the use of desktop PCs as tools for accessing library resources and services: so why not mobiles? Part of the problem, I think, is that we’ve not reconciled our historic library-y attitude to mobile phones with the possibilities of mobile computing. Whatever: we need to come to terms with them once and for all, decide on a position, and stick to it!)

Even given that we should be prepared to support mobile devices: do we need to encourage people to use them in the Library? (People seem to be adopting smartphones perfectly readily without the need for encouragement from libraries…) Perhaps what we need to encourage is not the use of mobile devices per se, but for students and academic staff to re-consider the use of them as valid devices for learning.

We also need to remember that ‘mobile devices’ ≠ just phones, but also mp3 players, tablets (e.g. iPads), e-book readers, netbooks, etc. etc.

After a while, we narrowed it down to six recommendations for the Library: three things we could do now, with no additional money, to support the use of mobile devices – three further things that we can plan to do in the future, which would require a bit of funding.

Do now with no extra money:

  1. Add QR codes to print journal box labels, to link our print holdings to the corresponding e-journal record (cf. this photo);
  2. ‘Soft launch’ the mobile version of RefWorks (RefMobile) to our users;
  3. Ask colleagues within the Library who are already smartphone enthusiasts (they know who they are!) to demonstrate their toys to the rest of us.

Do in the future with a bit of funding:

  1. Run a marketing campaign to encourage people to re-consider their mobile phone as a useful academic tool (“the classroom in your pocket”?);
  2. Systems development – make sure as many of our systems as possible have a valid mobile user interface, and target development at those systems which are lagging behind;
  3. Purchase tablet devices for library staff to use when ‘roving’: providing support to students away from the help desk (“…you don’t need to log in, I can show you on this!”).

If you don’t tweet, you don’t eat.

Posted on June 11th, 2011 by Paul Stainthorp

I’ve been told that asking questions is a good idea, so here goes:

Q. Why is the use of email compulsory for staff in universities, while Twitter/blogging is optional?

Managing e-journal holdings: different types of package: any tips?

Posted on June 9th, 2011 by Paul Stainthorp

The University of Lincoln Library provides access to lots and lots of electronic journals: 72,000-odd unique e-journal titles, at last count.

Some of these 72,000 titles are individual subscriptions – that is, journals that we pick off the shelf and pay for one-by-one – because they’re particularly appropriate to the teaching/research of the University. Many, many more of them are journals that come to us as part of a one-size-fits-all “Big Deal” database package, where we have little or no control over the titles on offer, but where there’s a critical mass of valuable content which makes it worth our while to subscribe to the whole thing. Yet more are freebie and/or Open Access titles available on the Internet which we list to make it easy for our users to find them.

In all, we maintain access to 73 separate e-journal packages (plus a handful of individual oddities that don’t form part of a package), and nearly 110,600 e-journal links (a fair number of titles are duplicated across packages).

Screenshot of the A-to-Z

To help us keep tabs on all this content, and to make sense of the many different e-journal access points on behalf of Library users, we make use of a nifty tool called the Electronic Journals A-to-Z, which is provided and maintained by a company called EBSCO Information Services. The A-to-Z consists of:

  • A hosted e-journal ‘knowledgebase’: a directory of all the possible e-journals available, from which we can select those titles to which we have access;
  • A public, searchable journal listings site, with tools for customising the display of particular e-journals (or entire packages), including the holdings data (i.e. the start- and end-dates of full-text holdings) for each title;
  • An OpenURL link resolver, which we brand as ‘Find it @ Lincoln’;
  • Various admin services including usage reports.
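For anyone unfamiliar with how the link resolver bit works: an OpenURL is just a query string of standard (Z39.88-2004) citation keys tacked onto the resolver’s base address. The sketch below builds one in Python – the base URL is made up, and the citation values are arbitrary examples, but the `rft.` key names are the real OpenURL ones:

```python
from urllib.parse import urlencode

# Hypothetical resolver address; ours is branded 'Find it @ Lincoln'.
BASE = "http://openurl.example.com/lincoln"

# Standard OpenURL 1.0 (Z39.88-2004) keys; the values are invented.
params = {
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.issn": "1234-5678",   # example ISSN, not a real journal
    "rft.date": "2011",
    "rft.volume": "81",
    "rft.spage": "61",
}

link = BASE + "?" + urlencode(params)
print(link)
```

The resolver’s job is then to match those citation details against our holdings data and send the user to the right copy.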

Even with the tools that the A-to-Z provides, it’s still a lot of work to keep on top of so many e-journals from so many different sources. To help us (“us” being me and two colleagues from the E-resources and Acquisitions teams), we maintain an ERM spreadsheet in Google Docs: this contains details of all the acquisitions & technical information we need to manage each package in the list.

The packages fall into four distinct categories [below]; each category has to be maintained in a different way.

  1. “Big Deal”-style databases, to which we subscribe in toto. These cause little or no bother. EBSCO do most of the work for us. Their A-to-Z knowledgebase contains details of all the titles in the database; EBSCO add new titles and remove old ones for us; we can be reasonably confident that their holdings data accurately reflect the database. The only real problems we have with these (and all) packages are around authentication – but that’s another story. This class of packages includes all the EBSCOhost databases (such as Academic Search Elite), most business databases, quite a few packages from JISC Collections, and all Open-Access platforms.
  2. “Vendor packages”, made up of a selection of individual titles from a single publisher or journal aggregator. Although all the titles exist within the knowledgebase, ready to be selected, EBSCO have no way of knowing in advance which titles we hold (save for a few titles for which EBSCO Information Services act as our ‘subscription agent’ – keeping up with all this?), nor the details of our full-text holdings. These packages (which include most of the high-impact scholarly journals from recognised academic publishers; those which—by definition—the Academic Subject Librarians have chosen on their constituencies’ behalf) are hard work to maintain, as well as being very prone to error. For any more than a small handful of titles, we can’t possibly keep on top of them ‘manually’, and must rely on downloaded publishers’ holdings reports, which we then have to process into an EBSCO-friendly, tab-delimited format before uploading them to the A-to-Z. Publishers rarely make their holdings reports available in an immediately usable format, and subscription holdings are subject to irritatingly regular change, making this the Forth Bridge (a Sisyphean task, for non-UK readers!) of e-resources admin. We’re starting to reduce the size of the job by checking whether all of these packages are absolutely necessary: I suspect that some of the smaller publishers could be rolled up into the larger ‘aggregator’ packages with no loss of access.
  3. “Other” titles that don’t belong to any package. These represent a tiny proportion of our e-journals (we currently list 45 “Other” titles out of 72,000 = 0.06%) and an even more minuscule proportion of our overall usage… BUT are responsible for a disproportionately large amount of work: especially around authentication. For that reason, I try and keep the number of “Other” titles to the absolute minimum possible. I’ll use any excuse to drop one :-)
  4. Finally, what EBSCO refers to as “Custom” collections (we have 13 in total): ‘local’ packages (for local people?): stuff that doesn’t appear in EBSCO’s knowledgebase at all. This is a grab-bag of oddities, experiments, print holdings (surprisingly popular), RSS feeds, and packages with really, really funky authentication requirements. Same as for the Vendor packages in 2, we have to add these to the A-to-Z by constructing and uploading a tab-delimited file. Again, I battle to keep these “Custom” packages to a minimum: but in actual fact they’re less trouble than they might be. We have complete control over the data, so they’re relatively easy to update, and they tend to be fairly low-maintenance once they’re up and running.
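The ‘process a publisher’s report into a tab-delimited upload file’ step (categories 2 and 4 above) is simple in principle, even if the reports themselves never are. A rough sketch, with invented column names on both sides – the real EBSCO upload spec should obviously be checked before relying on anything like this:

```python
import csv
import io

# A publisher's holdings report. Columns vary wildly in practice;
# these names (and the EBSCO-side layout below) are invented.
publisher_csv = """Title,ISSN,Coverage Start,Coverage End
Journal of Examples,1234-5678,1997,2011
Annals of Placeholders,2345-6789,2003,
"""

def to_tab_delimited(report):
    """Re-shape a CSV holdings report into tab-separated rows."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="\t")
    for row in csv.DictReader(io.StringIO(report)):
        # An empty end date means the subscription is still running.
        writer.writerow([row["Title"], row["ISSN"],
                         row["Coverage Start"],
                         row["Coverage End"] or "Present"])
    return out.getvalue()

print(to_tab_delimited(publisher_csv))
```

The painful part isn’t the reshaping – it’s that every publisher’s report needs its own variant of it.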

You can browse a list of our current e-journal packages at: http://lncn.eu/h59

I’d really, really like to simplify things, especially for classes 3 and 4. Question for fellow e-resources librarians: what tricks do you have for managing your e-journal packages and holdings information?

Notes on IP authentication in libraries

Posted on May 20th, 2011 by Paul Stainthorp

This post follows on from my earlier authentication rant – here’s where I try and get a bit more constructive. Starting with the fundamentals:

IP authentication to electronic library resources… ‘s easy, innit? Nothing to worry about. We just give the details of our IP ranges to publishers, and they allow any computer with an address within that range (i.e., one of our on-campus computers or a mobile device connected via our wifi network) to access site content which is otherwise restricted: for example, a full-text PDF journal article.
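In code terms, the check a publisher’s site is effectively performing is a one-liner. Python’s standard `ipaddress` module makes the logic explicit (the ranges below are example addresses, not Lincoln’s real ones):

```python
import ipaddress

# Example campus ranges, registered with the publisher in advance.
# These are illustrative addresses only.
CAMPUS_RANGES = [ipaddress.ip_network("204.245.240.0/24"),
                 ipaddress.ip_network("204.245.8.0/22")]

def on_campus(addr: str) -> bool:
    """True if the client address falls inside any registered range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in CAMPUS_RANGES)

print(on_campus("204.245.240.17"))  # on-campus: serve the full text
print(on_campus("81.101.22.5"))     # off-campus: deny, or offer a login/proxy
```

Everything that follows is about the messy edges of that simple check.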

Some notes:

(Thank you to Elif Varol for chasing down some of these details across the Internet, and to @aekins and others who supplied their expertise via Twitter and email.)

  1. There are a few different ways of expressing IP ranges (‘notations’); a publisher may require that we supply our IP range(s) using a particular notation:
    • The standard dotted quad notation a.k.a. dot-decimal notation, made up of four eight-bit numbers (octets), generally expressed as decimal numbers, separated by full stops:
      • Full range e.g. 204.245.240.0-204.245.240.255
      • Range within the last octet e.g. 204.245.240.0-255
      • Wild card within the last octet e.g. 204.245.240.* (N.B. these first three are all equivalent to each other.)
      • Ranges and wild cards within higher octets e.g. 204.245.[8-11].* (The square brackets aren’t always necessary.) Some publishers will not accept these more complex ways of expressing ranges, so we have to list each range separately using wild cards only in the last octet, i.e. 204.245.8.*; 204.245.9.*; 204.245.10.*; etc.
    • CIDR notation (much less frequently asked for):
      • e.g. 204.245.8.0/22, where /22 is the number of most significant bits (counting from the left) which are common to both the top and bottom ends of the IP range. (I’ve not expressed that very well, but that’s how my brain deals with it!) In this example, the range 204.245.8.0-204.245.11.255 expressed in binary is 11001100.11110101.00001000.00000000-11001100.11110101.00001011.11111111; you can see that the first 22 bits are common to the top and bottom addresses of the range. There’s a useful IP-range-to-CIDR converter tool at: ip2cidr.com
  2. But is it safe to hand out the details of our IP address ranges like this? I’ve certainly seen one ICT colleague’s eyelid twitch when I’ve mentioned this is what libraries do (and have been doing so for ages).
  3. Some university libraries route all of their web traffic through a small number of proxy servers, so that all users’ traffic appears to come from a handful of individual IP addresses – this reduces the complexity of the information they need to give out to publishers. Apparently (though no-one appears to want to give me a list), the University of Lincoln now has a single ‘apparent’ external IP address for each University building (i.e. some 45+ buildings, not including agricultural buildings) and one for each wifi network. This ought to make it possible to associate usage with an individual building or group of buildings. Does anyone do this? It strikes me that it would be very useful to be able to say, for instance, “X% of usage of ScienceDirect comes from within our Science building”. We have at least one resource where usage is restricted to within libraries only – luckily, we do know the ‘apparent’ IPs of our own buildings.
  4. Any change to a library’s IP addresses will have to be communicated to a large number of publishers. We have in our ERM spreadsheet an (almost-certainly incomplete) list of publishers who hold our IP ranges, along with their contact details, so that we know who to inform if there’s a change… but this process worries me; it invites errors and inconsistencies. I’d much rather register or publicise my IP ranges once, centrally (on the University’s own servers, or via a shared registry service like OCLC’s WorldCat Registry) and have all publishers pick them up from there.
  5. The vast majority of IP-authenticated resources perform this authentication automatically, but a tiny few oddities (including the handful of engineering journals we take via the IEEE, I think) seem to require that the user clicks on an explicit ‘authenticate via IP’ link first. Why?
  6. There’s an obvious problem for users who move between on-campus and off-campus computers (i.e., most users!); they will not get the same seamless access to restricted content, and some resources (e.g. Index to Theses) may only be available from within our IP range. How do libraries handle the transition between IP and other kinds of authentication for off-campus users? Through ‘user education’ (a lovely phrase, that – it covers up all sorts of system difficulties!), or by trying to design a system that recognises the user’s location (“geoaware”) and routes accordingly to hide the transition? There was a useful JISC Publisher Interface Study (2009) which explored some of these issues.
  7. Proxy tools such as the much-vaunted EZProxy or our own dear LibResProxy (which I’ve been informed are both actually ‘reverse’ proxies [edit: or possibly some other flavour of URL-rewriting proxy??] – my eyes started to glaze over at that point…) are a useful bodge for providing simple off-campus access on the same basis as on-campus IP lookup: effectively they ‘mask’ the user’s actual, off-campus, out-of-range IP with an in-range, institutional IP address by routing the user (who must log in to the proxy tool first) through a server on the campus network. Libraries that use EZProxy swear that it simplifies things greatly for the user, is very reliable, and reduces the number of support queries compared with e.g. Athens/Shibboleth… but at the same time, proxies seem to be looked down upon by the library/information ‘establishment’. I understand that they don’t offer the same opportunities as federated access for personalising the user experience; they can be slow, too. But my suspicion is that users will go for straightforward, predictable, reliable full-text access over personalisation, nearly every time.
  8. All of what I know about IP address authentication applies to IPv4. What, if anything, needs to change to take account of IPv6?
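Incidentally, the notation conversions in note 1 don’t need a web tool: Python’s standard `ipaddress` module does the range-to-CIDR sum directly, and the same functions accept IPv6 addresses, which is at least a partial answer to the last question. A sketch using the example range from above:

```python
import ipaddress

# The aligned example range from note 1 collapses to a single /22.
first = ipaddress.ip_address("204.245.8.0")
last = ipaddress.ip_address("204.245.11.255")
blocks = list(ipaddress.summarize_address_range(first, last))
print(blocks)

# An unaligned range comes back as several CIDR blocks instead.
messy = list(ipaddress.summarize_address_range(
    ipaddress.ip_address("204.245.8.0"),
    ipaddress.ip_address("204.245.10.255")))
print(messy)
```

Swap in `2001:db8::`-style addresses and the same code emits IPv6 CIDR blocks – the notation question, at least, carries straight over; whether publishers’ registration forms will cope is another matter.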