Possible addition of a new Work Package: Electronic Item Availability Service (EIAS)

The original middleware component (see WP6) is now to be known as the Library Item Availability Service, aka LIAS.

We have identified the need for a second middleware component which will be known as Electronic Item Availability Service aka EIAS. Development of this second service requires the addition of a new work package.

Specification of Electronic Item Availability Service (EIAS)

As part of this project we would additionally like to display availability information for Electronic Items by querying the University’s SFX (Ex Libris) OpenURL resolver, for example for items located in PubMed. When the project was devised it was assumed that availability data for both library and electronic items would come from the same source. As we have progressed, it has become clear that this is not the case.

To do this we need to write a second middleware component (EIAS). We need two separate middleware services because we don’t want to bundle two requests into one call: the SFX API may have performance issues, and making the calls separately ensures that a slow SFX does not block the responses coming back from Primo. We may hit the browser connection limit, so we may need to submit batch requests. We need to investigate this, as ideally we would like smooth, steady page updates. The call to EIAS will have to pass the whole OpenURL; we are currently not sure whether we should use DAIA again or something different.
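To sketch the batching idea (the function names and batch size here are illustrative, not part of any agreed design), the in-page JavaScript could keep only one batch of EIAS requests in flight at a time:

```javascript
// Split a list of OpenURLs into batches, since browsers only allow a
// handful of concurrent connections per host (2-6 is typical).
function batch(items, size) {
  var batches = [];
  for (var i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Send each batch to EIAS via a caller-supplied send function,
// starting the next batch only when the previous one has completed,
// so page updates stay smooth and steady.
function sendBatches(openUrls, sendFn, done) {
  var batches = batch(openUrls, 5);
  (function next(i) {
    if (i >= batches.length) { return done(); }
    sendFn(batches[i], function () { next(i + 1); });
  })(0);
}
```

Whether batching or simply throttling is the better fit is exactly the sort of thing the investigation above should settle.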

This work package will also involve writing an additional Juice extension that talks to the EIAS, alongside the existing one that talks to the LIAS.

We must also make sure that we handle both LIAS and EIAS availability information consistently in terms of the user interface.

Posted in Uncategorized | Comments Off on Possible addition of a new Work Package: Electronic Item Availability Service (EIAS)

OpenURLs in Sir Louie

As part of the Sir Louie project we are passing details of a reading list item from the library catalogue to a reading list in our VLE (WebLearn). To do this we are using the existing OpenURL standard. For this integration to work, the library catalogue generates links to WebLearn with details of the item to be added to the reading list encoded in the URL using the OpenURL standard. WebLearn then has to parse the URL and add the new item to the list.

Originally the OpenURL standard (version 0.1) was small and very limited in its functionality, but for implementers this meant that it was reasonably easy to implement the complete specification. By contrast, OpenURL 1.0 is a large (104 page) specification, so when looking at implementing it we had to decide which parts to support. The writers of the specification seem to have been aware of this, as the standard includes support for profiles: named sets of features that an implementation supports. The standard profiles are available from a profile registry, and although these reduce the amount of the standard that you need to support, they are still moderately large. So for the Sir Louie project we are effectively defining our own profile and trying to keep it as slim as possible.

The Sir Louie Profile

This is, in practice, the Sir Louie profile which we will support for sending items.


Entity: Referent
Description: the Entity about which the ContextObject was created (a referenced resource)
Required: Mandatory
Example: a referenced journal article

Namespaces

"http" URI Scheme info:ofi/nam:http:
"https" URI Scheme info:ofi/nam:https:
"ISBN" URN Namespace info:ofi/nam:urn:ISBN:
"ISSN" URN Namespace info:ofi/nam:urn:ISSN:
Digital Object Identifiers info:ofi/nam:info:doi:

Character Encodings

UTF-8 Unicode info:ofi/enc:UTF-8


Serialization

Key/Encoded-Value (KEV) info:ofi/fmt:kev

Constraint Language

Z39.88-2004 Matrix info:ofi/fmt:kev:mtx

ContextObject Format

KEV ContextObject Format info:ofi/fmt:kev:mtx:ctx

Metadata Formats

KEV Metadata Format for Journals info:ofi/fmt:kev:mtx:journal
KEV Metadata Format for Books info:ofi/fmt:kev:mtx:book
KEV Metadata Format for Patents info:ofi/fmt:kev:mtx:patent
KEV Metadata Format for Dissertations info:ofi/fmt:kev:mtx:dissertation


Transport

Inline OpenURL info:ofi/tsp:http:openurl-inl

In practice this means that the features supported by this profile are very similar to the original OpenURL 0.1 specification. Our library catalogue already generates URLs like this, so most of the development work has been in getting WebLearn to parse these URLs and store them internally in the reading list.
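For illustration, here is a sketch of pulling the KEV fields out of such a URL’s query string. The `rft.*` keys come from the standard KEV metadata format for journals; the function itself and the sample values are just illustrative:

```javascript
// Parse the query string of a KEV-encoded OpenURL into a simple
// object keyed by the KEV names.
function parseKev(query) {
  var fields = {};
  var pairs = query.split('&');
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf('=');
    if (eq < 0) { continue; }
    var key = decodeURIComponent(pairs[i].slice(0, eq));
    var value = decodeURIComponent(pairs[i].slice(eq + 1).replace(/\+/g, ' '));
    fields[key] = value;
  }
  return fields;
}

// A journal-article ContextObject in the KEV format named above:
var citation = parseKev(
  'ctx_ver=Z39.88-2004' +
  '&rft_val_fmt=info:ofi/fmt:kev:mtx:journal' +
  '&rft.jtitle=Nature&rft.atitle=An+example+article' +
  '&rft.issn=0028-0836&rft.volume=42&rft.spage=1');
// citation['rft.atitle'] -> 'An example article'
```

The slimness of the profile shows here: a plain key/value parse recovers everything WebLearn needs to store.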
There are a couple of very good articles about how to construct simple OpenURLs, available as part of the work that has been done on COinS, a specification for placing an OpenURL in an HTML page so that automated tools can extract bibliographic information about the entries.


Just a thought regarding performance issues!

If we have performance problems with large numbers of XMLHttpRequests not working well in browsers, we could possibly look at streaming the response back to the client through a long(ish)-lived connection: the browser sends all the IDs it wants looked up in one go, and then the server makes one or more requests to the availability services and sends the responses down the connection as it gets them. This would mean that we weren’t talking DAIA, but it might be worth thinking about….

More details about HTTP streaming to a browser are on: http://ajaxpatterns.org/HTTP_Streaming
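As a rough sketch of that pattern (the endpoint and the newline-delimited record format are hypothetical), the client could watch the partially streamed responseText and peel off complete records as they arrive:

```javascript
// Given the text buffered so far, extract the complete
// newline-delimited JSON records and return them along with the
// unfinished remainder.
function drainRecords(buffer) {
  var lines = buffer.split('\n');
  var remainder = lines.pop();  // the last piece may still be arriving
  var records = [];
  for (var i = 0; i < lines.length; i++) {
    if (lines[i].length > 0) {
      records.push(JSON.parse(lines[i]));
    }
  }
  return { records: records, remainder: remainder };
}

// Browser wiring (illustrative): readyState 3 fires repeatedly as data
// streams in, so we re-drain whatever has arrived since last time.
if (typeof XMLHttpRequest !== 'undefined') {
  var xhr = new XMLHttpRequest();
  var seen = 0;
  xhr.open('POST', '/availability-stream');  // hypothetical endpoint
  xhr.onreadystatechange = function () {
    if (xhr.readyState >= 3) {
      var chunk = xhr.responseText.slice(seen);
      var result = drainRecords(chunk);
      seen += chunk.length - result.remainder.length;
      // result.records holds the newly completed availability records
    }
  };
  xhr.send('id=UkOxUUkOxUb15585873');
}
```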


Middleware Component Specification: Library Item Availability Service (LIAS)


As part of the Sir Louie project we need to display availability information for items in a reading list. The reading list will be rendered as normal by WebLearn (Sakai) but then once the page is ready in the web browser some JavaScript (Juice) will run which will scan the page for embedded metadata (COinS) and use this information to make additional requests to retrieve availability information.

In Oxford the availability information for locally held items is in Ex Libris Primo. As it is not possible for the JavaScript to contact Primo directly, the requests from the browser will be sent to a piece of newly developed middleware which will in turn contact Primo to get availability information. We have called this component the Library Item Availability Service (LIAS), and here’s how this will work:

The reasons for needing the middleware include:

  • For security reasons, browser requests are limited to talking only to the server from which the page originates, or, to get around this restriction, to a service on another host that returns data in a specific format (JSONP). The source of availability information (Primo) doesn’t currently support JSONP and is hosted on a different server to WebLearn.
  • The document returned from Primo when asking for availability information is very large, which makes it slow to transfer and then slow to parse in a JavaScript client.
  • The document returned from Primo is complex, so we would need to embed complex parsing code in the client, which limits the ability to reuse the client-side code.
  • If the availability source or format changes in the future, clients can be isolated from the change by updating the middleware.
  • The Primo services are IP-restricted in their deployment here, so access from a user’s web browser isn’t currently possible.
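As background, the JSONP technique mentioned above works by loading the response as a script rather than via XMLHttpRequest. A minimal sketch follows; the function and parameter names are our own, and Primo itself offers nothing like this, which is exactly why the middleware is needed:

```javascript
// Append a JSONP callback parameter to a service URL; 'callback' is a
// common convention for the parameter name, not a fixed standard.
function buildJsonpUrl(url, callbackName) {
  var sep = url.indexOf('?') < 0 ? '?' : '&';
  return url + sep + 'callback=' + callbackName;
}

// Browser wiring (illustrative): a <script> element is exempt from the
// same-origin restriction, so the server replies with a script of the
// form:  handleAvailability({"version": "0.5"});
if (typeof document !== 'undefined') {
  window.handleAvailability = function (data) {
    // update the availability display on the page
  };
  var script = document.createElement('script');
  script.src = buildJsonpUrl('/library-availability/', 'handleAvailability');
  document.getElementsByTagName('head')[0].appendChild(script);
}
```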

WebLearn Reading list to LIAS

The JavaScript in the reading list page will make asynchronous requests to a web application deployed on WebLearn using the Document Availability Information API (DAIA), currently version 0.5. DAIA allows for two encodings, JSON and XML; to ease parsing on the client side we should initially develop with JSON, but XML should be considered as a future extension. JSONP may also be a format we wish to support, to allow for wider adoption. Only some of the items in the list will have been retrieved by searching Solo, and so will have a docId with which we can find local availability information. The implementation should support multiple queries per request and will probably be deployed on a URL such as https://weblearn.ox.ac.uk/library-availability/ .


Request: https://weblearn.ox.ac.uk/library-availability/?id=UkOxUUkOxUb15585873&format=json


{
  "version" : "0.5",
  "schema" : "http://ws.gbv.de/daia/",
  "timestamp" : "2009-06-09T15:39:52.831+02:00",
  "institution" : {
    "content" : "University of Oxford",
    "href" : "http://www.ox.ac.uk"
  },
  "document" : [ {
    "id" : "UkOxUUkOxUb15585873",
    "href" : "http://XXX/XXX.pl?id=15585873",
    "items" : [ {
      "department" : {
        "id" : "RSL",
        "content" : "Radcliffe Science Library"
      },
      "storage" : {
        "content" : "Level 2"
      }
    }, {
      "department" : {
        "id" : "SCCL",
        "content" : "St Cross College Library"
      },
      "storage" : {
        "content" : "Main Libr"
      }
    } ]
  } ]
}
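A client-side helper to assemble such requests might look like the sketch below; joining multiple ids with `|` and a `callback` parameter for JSONP are assumptions, not settled API:

```javascript
// Build a LIAS request URL for one or more document ids.  The '|'
// separator for batched ids and the JSONP 'callback' parameter are
// illustrative choices only.
function buildDaiaUrl(base, ids, format, callback) {
  var url = base + '?id=' + ids.map(encodeURIComponent).join('|') +
            '&format=' + format;
  if (callback) {
    url += '&callback=' + callback;  // JSONP variant
  }
  return url;
}

buildDaiaUrl('https://weblearn.ox.ac.uk/library-availability/',
             ['UkOxUUkOxUb15585873'], 'json');
// -> 'https://weblearn.ox.ac.uk/library-availability/?id=UkOxUUkOxUb15585873&format=json'
```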

Retrieval of Availability by LIAS

The availability information for items in the Oxford catalogue can be obtained by making requests to Primo via its web services. Primo has two APIs (WebServices and XServices), and it is expected that XServices will be the easier to integrate with. Copies of the documentation can be obtained, but due to certain inaccuracies, developing against a live service is the only practical option.



The response has been trimmed to show only the relevant part (some URLs have been removed to protect the innocent):

   <sear:collection>Main Libr</sear:collection>
   <sear:callNumber>(0360 h 015/01)</sear:callNumber>

General Notes

LIAS should map the results retrieved from Primo onto DAIA and return them to the client. Although the service will initially be deployed with Sakai, it should not bind against any Sakai APIs, so that it can be reused in other environments. The service should cache retrieved results through a tuneable cache (for example, OSCache). The responses returned to the client may also be cacheable by the browser for a short period of time. No authentication of requests is needed. It should be written in Java, using Maven as its build tool.
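Purely as an illustration of the mapping step (the real LIAS will be Java; the input field names here are our own shorthand for the sear:collection and sear:callNumber elements in the trimmed response above, and the DAIA `storage` and `label` fields come from the DAIA specification):

```javascript
// Map one trimmed Primo availability entry onto a DAIA-style item:
// the collection becomes the storage content and the call number
// becomes the item label.
function primoToDaiaItem(entry) {
  return {
    storage: { content: entry.collection },
    label: entry.callNumber
  };
}

primoToDaiaItem({ collection: 'Main Libr', callNumber: '(0360 h 015/01)' });
// -> { storage: { content: 'Main Libr' }, label: '(0360 h 015/01)' }
```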


We may at some point in the future wish to support an additional API to retrieve availability information for remote resources (typically electronic), as this information doesn’t appear to be available directly in Primo. This API has yet to be designed, but it is envisaged that it may well involve sending an OpenURL to the middleware component and having it passed through as a request to the SFX OpenURL resolver’s API. The data returned from the resolver is XML, which would be transformed into JSON; the format of the JSON has yet to be discussed.

Keeping the two APIs distinct means that we don’t have to wait for both services to return before responding, and we can use an appropriate format for each, although it does push more of the burden onto the in-page JavaScript.

We may also wish to wrap up the Primo search API so that we can get docIds for items that were added to the list by a method other than searching Solo directly. If an item is added by importing an existing list through RIS, we will most likely not have docIds for those items, so we won’t be able to display availability information. Being able to use the search API would mean we could look up the docId for imported items.



Juice project code licence (MIT) now compatible with Sakai

When the Sir Louie project started, we were very keen to use the open source Juice JavaScript library, but felt we would be unable to do so since its licence (GPL) was not compatible with the Sakai licence (Educational Community License, version 2.0).

We contacted the main developers and asked if they would consider taking the same approach as jQuery and adopting dual licensing. The suggested action was to add an MIT licence.

We are happy to announce that Talis (who are the main developers of the library) listened to our cause and have now (from version 0.6.4 onwards) agreed to the dual licensing approach.

This leaves the Sir Louie project free to build upon Juice and bundle the Juice library with Sakai Citation Helper (should our improvements to this tool be accepted by the Sakai community).

  1. Educational Community License, version 2.0
  2. MIT licence
  3. http://juice-project.org/
  4. http://groups.google.com/group/juice-project-discuss/browse_thread/thread/db95570161e854a1?pli=1

Availability Information: DAIA or DLF-ILS DI that is the question!

Part of the Sir Louie project involves placing dynamic ‘real-time’ availability information into our reading lists. There are two existing standards which may be of use in this area:

  1. Document Availability Information API (DAIA)
  2. Digital Library Federation – Integrated Library Systems – Discovery Interface (DLF-ILS DI)

DAIA is currently at version 0.5 and is a reasonably small specification, specifically designed for conveying availability information between services. It has a simple REST-based API, a simple data model, and allows encoding in XML or JSON. Many fields in the data structure are optional; nonetheless, it provides a rigid specification to write against.

DLF-ILS DI is more mature and stands at version 1.1. It is a larger technical recommendation aimed at improving interoperability between library systems and external discovery tools, covering harvesting records, availability information, patron functions, and OPAC functions. It is a broader recommendation rather than a strict specification, as one is free to choose which bindings to use for a particular function.

Initially we were looking at using DLF-ILS DI, as it allows availability information to be represented at the bibliographic level as well as the item level. Looking at the (somewhat sub-optimal) documentation for the Primo web services, we believed that we would be getting bibliographic information back from Primo, and DAIA only allows availability to be represented at the item level. However, when we began to experiment with the Primo web services, it became clear that we were actually getting item-level information back.

So we were left with the choice of which to use. In the end DAIA looked to be the better bet because:

  1. It is a reasonably strict format, so anything that can talk to one DAIA implementation should be able to talk to another. DLF-ILS DI is just a broad recommendation and doesn’t guarantee interoperability.
  2. It is a smaller specification, which makes it easier to understand and implement.
  3. It is friendly to web browsers in its support for JSON and simple REST-based calls.


  1. DAIA:  http://www.gbv.de/wikis/cls/DAIA_-_Document_Availability_Information_API
  2. DLF-ILS: http://www.diglib.org/architectures/ilsdi/DLF_ILS_Discovery_1.1.pdf

Matthew Buckett


What’s in a (window.) name?

Sir Louie will make it possible to build a reading list in WebLearn (our VLE, based on the Sakai open source platform) by searching for items using the University’s library search interface SOLO (based on Ex Libris Primo). This concept is based on the existing functionality in the Sakai ‘Citation Helper’ module, which allows you to do something similar using Google Scholar to search for items to add to the list.

The way the Citation Helper works with Google Scholar is that once you create a reading list in the Citation Helper, you have the option to “Search Google Scholar”. Clicking this option pops up a new browser window with Google Scholar loaded.


As you search Google Scholar, each item in the results carries, in addition to the usual options Google Scholar supports, an additional ‘Import into Sakai’ (or equivalent wording) link. When the user clicks this, the citation is saved into the Citation Helper reading list they are working on.


The way this is achieved is that an additional URL parameter is passed to Google Scholar that tells it both to offer the link back to Sakai and which particular Citation Helper reading list items should be added to. For example, a URL could look like this:


The ‘linkurl_base’ parameter is persisted by Google Scholar throughout your search session: each time you click a link in the Google Scholar search, it passes linkurl_base along, so it always knows where to send any citations you save.

When we came to look at achieving similar functionality with SOLO, we had a challenge: how to tell SOLO that the user needed to see the ‘Import into WebLearn’ link, and how to ensure SOLO was able to pass the citation back to the appropriate Citation Helper list. Having decided that we would use JavaScript to add some aspects of functionality to the SOLO interface, what we ideally wanted was a mechanism that:

  • Did not require any major modifications to SOLO
  • Allowed any stored values to be easily read by JavaScript
  • Would allow us to display the appropriate link back to WebLearn for a single search session on SOLO

While we considered copying the Google Scholar approach, it would have meant significant work to enable SOLO to pass around a new URL parameter, and ideally we didn’t want to make significant changes to the system.

We decided to look at more ‘lightweight’ ways of passing this information between the two, otherwise unconnected, systems. The first approach we considered was using a Cookie to store the relevant information. However, WebLearn and SOLO are on different subdomains of *.ox.ac.uk, and we weren’t keen on setting a cookie on the top-level domain, not least because if we were going to offer this development to other sites there was no guarantee that in other situations the Sakai installation in question would be in the same domain as the library system.

We also looked at passing a URL parameter to SOLO which could be immediately written to a cookie at the SOLO end, within the SOLO domain. However, the load balancer that sits in front of SOLO removes any extraneous parameters from the URL before serving up the SOLO search interface to the user. While it would have been possible to investigate this further and see if the cookie could be written by the load balancer page, again this seemed to be getting deeper into changing the system than we wanted, and raised concerns about sustainability and transferability again.

So, having decided cookies were not going to do the job in this case, we had to start looking at other options. One suggestion was using the browser window.name property to store information. The window.name could easily be set from WebLearn when opening a new window for the SOLO search (simply by including a ‘target’ attribute in the link, e.g. <a target="WebLearn" href="http://solo.bodleian.ox.ac.uk">). The window.name property can easily be read (and if necessary set) using JavaScript, and of course it didn’t require us to make any particular changes to SOLO beyond the JavaScript we were going to use to display the ‘Import into WebLearn’ link. So the only question remaining was: would it work?
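A minimal sketch of that round trip (the JSON payload shape and the example return URL are our own choices, not an agreed format):

```javascript
// Encode the reading-list context as JSON so several values can share
// window.name; decode defensively, since any page visited in the same
// window could have overwritten it in the meantime.
function encodeContext(context) {
  return JSON.stringify(context);
}

function decodeContext(name) {
  try {
    var context = JSON.parse(name);
    return (context && typeof context === 'object') ? context : null;
  } catch (e) {
    return null;
  }
}

// WebLearn side (illustrative): the link's target attribute names the
// window, so <a target="WebLearn" href="http://solo.bodleian.ox.ac.uk">
// opens SOLO with window.name === 'WebLearn'.  To carry more data, a
// small script could instead set window.name before navigating:
//   window.name = encodeContext({ returnUrl: 'http://weblearn.example/return' });
//
// SOLO side: the injected JavaScript reads the context back:
//   var context = decodeContext(window.name);
```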

Some research quickly turned up examples of people using window.name for reasons very similar to the ones that had led us to investigate it. Most notably, Thomas Frank has written a JavaScript library that enables the use of window.name to store not just a simple value but all kinds of session variables; see http://www.thomasfrank.se/sessionvars.html for more details. Surprisingly, window.name can store several megabytes of information; the limits are imposed by the browser rather than any particular limitation of the property itself, and Opera seems to be the most conservative (and possibly most sensible), limiting window.name to (only!) 2 MB.

Storing session variables in window.name is not without its issues; in particular, security is a concern. As window.name isn’t limited to a domain (unlike cookies, and one of the reasons it is useful to us), anything you store in window.name can be read by any other web page. This clearly means you don’t want to store anything that could result in personal or secure information being intercepted. Thomas Frank considers security issues at the bottom of his piece at http://www.thomasfrank.se/sessionvars.html, but for a sceptical view of the use of window.name see http://www.boutell.com/newfaq/creating/scriptpass.html, which describes three ways of passing data between web pages using JavaScript, including the use of window.name.

We feel reasonably confident that the security issues raised will not affect us, as we are not passing any personal or secret information around, and so for us window.name looks like the most promising approach to enable WebLearn to pass information to SOLO.


Modifications to earlier screen shots

Since the original screen shots were posted SOLO has been upgraded and now has a new search interface (original post: http://blogs.it.ox.ac.uk/sirlouie/2010/08/24/proposed-new-user-interface-for-the-reading-list-tool/).  Here are new mock-ups of the “Import into WebLearn” links.




A video explaining the project

This video of me at JISC’s Flexible Service Delivery seminar in Nottingham on 9th September will probably become a cult internet hit one day.

In the meantime it should help to clarify what we are planning to do within the Sir Louie project.  The video:

  1. http://www.youtube.com/watch?v=u59otbRSsqE

Owen Stephens’ post about Sir Louie

see: http://www.meanboyfriend.com/overdue_ideas/2010/08/sir-louie/
