Sunday, March 23, 2014

Is known item searching *really* an issue for web scale discovery?

Since I began looking at web scale discovery in 2009-2010, I've seen many librarians comment on how known item search is harder in web scale discovery, and it's not just the rank-and-file librarians.

In the latest Ithaka S+R US Library Survey 2013, in the section on discovery, for the question "To what extent do you think that your index-based discovery service has made your users' discovery experience better or worse in each of the following areas?", Library Directors felt that "Helping users find items they already know about" was Discovery's weakest area. (Figure 35)

On a personal note, when we implemented Summon at my own institution, some of the most negative feedback we received was from graduate students and faculty, who lamented that many of the items they used to look for in the catalog were now hard to find.

Hence it was with great interest that I noticed the following tweet by Dave Pattern of Huddersfield University Library, a known library innovator and an early adopter of Summon.
I believe he was reacting to earlier tweets coming out of ER&L 2014, where a presenter claimed to have improved results by tweaking the ranking to improve, among other things, known item search. What followed was a wide-ranging discussion on Twitter, with many librarians and technologists working on discovery systems giving their two cents' worth.

Some claimed they had heard of such complaints but could never get a credible, non-contrived example, and most examples surfaced were due to spelling errors. This group felt it could be more of a perception issue.

A few others felt it was a real problem at first, but that the issue has improved over the years.

Yet others (a smaller group) felt it was an important issue.

I myself am of the view that it is an issue that has gotten better with time, though problems remain. And yes, often the user complaining just remembers the one time out of a hundred when the known item search fails to bring up the item, but it is still frustrating for a tenured professor to suddenly fail to find a simple item when previously they could.

It's somewhat difficult to generalise though, because some of us commenting are using Primo Central, others Summon, etc.

Even within Summon implementations, results can vary, as I have often found by comparing "fails" here with those of other Summon libraries. Two factors matter:

 1) Types of packages switched on (e.g. if you turn on HathiTrust or large newspaper database packages, known item searching of catalog results gets worse due to a "crowding out" effect, and you can't generally tweak the algorithm to compensate)

 2) Cataloging

That said, the question remains: how bad is the known item search issue? Even someone skeptical of known item search issues will probably concede problems will happen, because there are many more results to sort through.

You can see this is the main cause of the problem, because most problems disappear the moment you click "item in library catalogue" in Summon.

Currently in our instance of Summon, typing Freakonomics (the partial title of a popular book commonly known as such, and a frequent course reading) gets you only journal articles, newspaper articles, book reviews; anything but the book.

But refining to Library Catalog gets you the item.

The book Freakonomics is found only by restricting to items in the Library Catalog

I agree that discovery systems have harder jobs than OPACs, but that is cold comfort to someone who used to be able to find a known item with one search in the OPAC.

Admittedly, as the point person for complaints about discovery services here, such issues loom large in my mind. Randomly looking through search logs in Google Analytics also helps surface issues, though in reality the issue may not be that big.

There have been attempts to quantify this difficulty.

Most recent was Emily Singley's Discovery systems – testing known item searching, where she tested 8 libraries using the 4 major discovery services.

The test is interesting in that it tried five types of queries:
  • Single word titles (e.g. 1984)
  • Titles with “stop words” (e.g. To Have and Have Not)
  • Title/author keyword (e.g. Smith and On beauty)
  • Book citation (copied from bibliographies)
  • ISBN  
The results showed that WorldCat Local (name change to WorldCat Discovery Service coming?) came out on top. Google was slightly behind, followed by Summon, Primo Central and EDS.

Though interesting for comparison, the main issue, as pointed out in the comments, was that the test set did not come from real-world examples. Of course, Emily herself admits the test is "cursory".

Some libraries have done more specific tests, such as testing the top 1,000 most frequent known item search queries in their logs to show their discovery service performs almost as well as the traditional OPAC. In my institution, we did the same for journal title searches, databases and books before launch. This helped a lot, but the long tail of searches means users will still run into issues in many cases.
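A minimal sketch of what such a log-based benchmark looks like, assuming you have already exported your top known-item queries and fetched the top-10 result titles for each (the data and the crude containment match below are mine, purely for illustration):

```python
# Hedged sketch: given known-item queries mapped to the top-10 result
# titles the discovery service returned, plus the title each query was
# actually after, compute the hit rate ("found" = target in top 10).

def benchmark(top10_by_query, expected_title_by_query):
    hits = 0
    for query, expected in expected_title_by_query.items():
        results = top10_by_query.get(query, [])
        # Crude containment match; a real test would normalise
        # punctuation, subtitles, editions, etc.
        if any(expected.lower() in title.lower() for title in results[:10]):
            hits += 1
    return hits / len(expected_title_by_query)

# Toy data standing in for a real log export:
top10 = {
    "freakonomics": ["Freakonomics: a rogue economist explores...",
                     "Book review: Freakonomics"],
    "1984": ["Review of 1984", "Some unrelated article"],
}
expected = {"freakonomics": "Freakonomics", "1984": "Nineteen Eighty-Four"}

print(benchmark(top10, expected))  # 0.5 in this toy example
```

Rerunning the same query set whenever you change switched-on packages or ranking settings gives a quick before/after comparison.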

Fear of issues with known item is not without precedent

In fact, this fear of known item search becoming harder has precedent before the current era of web scale discovery.

When libraries moved towards keyword searching as a default via "next generation catalogues" like AquaBrowser, Encore and Primo, there was a fear that known item searching would become harder compared to title browse.

I remember as a newbie librarian sitting in a committee worrying that keyword search would make known item search harder.

Was this fear borne out?

Known item searching - keyword searching vs title browse - a systematic test

Perhaps it's instructive to study this example from University of Minnesota Libraries, where they systematically studied the effects of switching from

i) MNCAT classic - Aleph (Traditional catalogue typically title browse is default)

ii) MNCAT - Primo (basically next generation catalogue with keyword searching but no article index)

iii) MNCAT Discovery - Primo Central (Same as ii but includes article index)

H/T: found via a comment on Emily Singley's Discovery systems – testing known item searching blog post.


As explained in the very informative video above, they randomly selected 400 items from the search logs of their traditional OPAC to create benchmarks for MNCAT Classic (OPAC), MNCAT (Primo) and eventually MNCAT Discovery (Primo Central).

These 400 may include items that the library did not have.

MNCAT Classic was tested with "Title begins with" (title browse).


MNCAT was tested with Keyword search.

If the entry appeared in the first 10 results, or in "Did you mean" for MNCAT, it was considered found.

The results showed that 90% of the results were the same (66% of items appeared in both, 24% in neither).

8% of the time MNCAT Classic found the item but MNCAT did not, and 2% of the time the reverse happened.
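The arithmetic behind these overlap figures, using the study's assumption that "neither catalog found it" counts as correct (the library may simply not hold the item), works out as:

```python
# Reproduce the Minnesota overlap percentages: of 400 sampled known-item
# searches, 66% were found by both catalogs, 24% by neither, 8% only by
# the classic catalog, and 2% only by the keyword catalog.
both, neither, classic_only, keyword_only = 66, 24, 8, 2

# "Correct" = found it, or neither found it (item possibly not held).
classic_correct = both + neither + classic_only   # 98
keyword_correct = both + neither + keyword_only   # 92

print(classic_correct, keyword_correct)  # 98 92
```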

The video goes on to study the differences in results.

What's the bottom line?

Technically the classic catalog won: 98% of the time, the classic catalogue handled known item searches correctly, while the next generation catalogue with keyword searching worked correctly 92% of the time. (Assuming that when neither search finds the item, both are working correctly.)

Is this difference significant? I would argue not.

Our own experience shifting to keyword searching in III's Encore, a next generation catalogue, also backs up this finding: keyword searching is generally as capable as title browse for finding known items.

A lot depends on how the algorithm ranks items, of course (III's Encore algorithm is very well tuned for known item searching, matching title fields with highest priority), but since both traditional OPACs and next generation OPACs match only on traditional MARC records and not on articles, it's still relatively easy to get known item searches right.

What happens when you add an article index?

It will be very interesting to see the University of Minnesota Libraries results when they benchmark against MNCAT Discovery (Primo Central).

I will guess that known item search will be significantly worse (maybe 85%, particularly for author + title combos) without lots of customization, because sorting through all the newspaper and journal content makes the challenge much harder.

A key to reducing this issue is the "Did you mean...." function. It's relatively easy to do this for journal title searches, as some Primo libraries have done, but it needs to be done for books as well.

A "Did you mean" that could recommend popular textbooks based on circulation, presence in reading lists and other metrics, as suggested by Dave Pattern, could help.
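One way such a suggestion feature could work, sketched under my own assumptions (the fuzzy-match threshold, the tie-break by circulation count, and all the data below are hypothetical, not Dave Pattern's actual design):

```python
import difflib

def did_you_mean(query, titles_with_circ, cutoff=0.6):
    """Suggest up to 3 catalog titles textually close to a failed query,
    breaking ties by circulation count as a stand-in for popularity."""
    scored = []
    for title, circ in titles_with_circ:
        score = difflib.SequenceMatcher(None, query.lower(),
                                        title.lower()).ratio()
        if score >= cutoff:
            scored.append((score, circ, title))
    # Best textual match first, then most-circulated.
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [title for _, _, title in scored[:3]]

# Toy catalog of (title, circulation count):
catalog = [("Freakonomics", 412), ("Freak the Mighty", 35), ("Economics", 90)]
print(did_you_mean("freakonomic", catalog))
```

Reading-list membership or click-through counts could be folded into the same score in place of raw circulation.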

There are other ideas, not least of which is bento style.....


It's pretty obvious that web scale discovery systems have tradeoffs, and one of them is slightly less effective known item searching.

The question that isn't answered is: how big is the trade-off? The answer varies from audience to audience. My suspicion is that the popularity of bento style and/or the refusal to load catalogue data into discovery at some of the highly ranked Ivy Leagues/ARLs suggests that known item search can be a serious enough issue for some audiences to switch away from a "blended" style of results.

NCSU Libraries - Bento Style

The more graduate students and faculty you have, the greater the likelihood they will be doing known item searches for items that aren't on the typical reading lists or "did you mean" checklists that could help.

Granted, a lot of the searches they do can be challenging even for a traditional catalogue (looking for a particular edition of a common work, for example), but web scale discovery makes them nearly impossible.

So what do you think? Do you think known item searching issue in web scale discovery is over-blown?

Tuesday, March 4, 2014

Library and Blue Ocean strategies (II) - Reconstruct Market boundaries for academic libraries

In my last post, I mused about blue ocean strategies and how libraries should consider spending time focusing more on blue ocean strategies.

I gave the example from the book of the declining circus industry and how Cirque du Soleil changed the rules of the game. Instead of competing along the usual circus industry factors, they innovated by blending in classic theater and reaching out to new markets, drawing in a more intellectual crowd while reducing other elements like animal acts.

I think, like most industries, libraries have always focused on red ocean strategies: basically, how to make existing processes better. We are good at tracking our input and output statistics and at process improvement. Increasingly, we do bench-marking studies, which focus more on what other libraries are doing and making sure we do the same.

Red ocean strategies are important, no doubt, and they will always be the bulk of our strategies. But they won't suffice alone.

This is particularly so since our industry arguably shares characteristics similar to that of the circus industry, where the industry market demand is falling as users start to prefer other alternatives to our services.

Traditionally, libraries are also conservative, and it's always a safer bet to improve some existing process incrementally than to strike out and try a radical new initiative.

Brian Mathews of Virginia Tech Libraries (a library I think leads the way with many new ideas) wrote a whitepaper, Think like a startup - A white paper to inspire library entrepreneurialism, and talked about the need for true innovators.

He wrote

"Many library strategic plans read more like to-do lists rather than entrepreneurial visions. With all the effort that goes into these documents I’m not sure that we’re getting a good return"

and then goes on to say

"They don’t say: we’re going to develop three big ideas that will shift the way we operate. They don’t say: we’re going delight our patrons by anticipating their needs. They don’t say: we’re going to transform how scholarship happens. They don’t attempt to dent the universe." [emphasis mine]

Blue ocean strategies, I think, are exactly the type of strategies designed to help produce the kind of thinking that can "develop three big ideas that will shift the way we operate".

Two chapters in the book that I found particularly helpful for promoting the kind of thinking needed to find such big ideas are Chapter 3, "Reconstructing Market Boundaries", and Chapter 5, "Reach Beyond Existing Demand".

Chapter 3 introduces the six paths framework, which helps promote thinking that breaks out of the fundamental assumptions underlying most industries' traditional strategies.

I am going to try to use them in the academic library context. Sadly, I don't have any ground-breaking ideas (at least not ones I wish to share). What I will do instead is examine the current "innovative" or "radical" initiatives academic libraries are trying circa 2014, and show how they could be seen as attempts to find new blue ocean spaces of demand.

Look across complementary product and service offerings

This is probably the easiest idea to apply and it seems to me the bulk of new library ideas seem to come from here.

The idea here is to look at what happens before and after your service or product is used. Can you combine or absorb complementary services under one roof, making things a lot easier?

A toy example would be cinema operators making it easy for married couples to leave their child with a babysitter while they go out to have fun at the movies.

The academic library example of this could typically be summed up as "support the lifecycle of scholarly communication".

This leads to a host of things beyond merely supporting searching for articles and books including
  • Reference management
  • Grant searching/ proposal writing
  • Research Data support
  • Operating Institutional repositories
  • Library as open access publisher
  • Support of research assessment and bench-marking (e.g. bibliometrics)
  • Providing technical expertise for pretty much anything the researcher might need help with in their research

Arguably one could also fit in the trend of combining IT and library support desks, as well as the provision of computer workstations and other authoring tools in the library (the next logical thing after finding a book for your paper is to write it on a PC!), as a way of combining complementary services under one roof.

Look across alternative industries

The book points out that "Alternatives are broader than substitutes... Alternatives include different products or services that have different functions and forms but serve the same purpose".

On the other hand, substitutes tend to have the same core functionality but may have different forms.

It's a subtle point, but the authors give as an example a CPA (Certified Public Accountant) and accounting software: they are substitutes because they have the same function (getting accounting done) but different forms.

On the other hand, visiting a restaurant or a cinema can be seen as alternatives: they have different forms and functions (enjoying a good meal vs watching a good movie) but arguably serve the same purpose, i.e. enjoying a night out.

The idea here is to expand the market by embracing characteristics of alternatives and not just close substitutes.

It seems to me these definitions are a bit grey, but let's see what I can do with them.

Patron driven acquisition (PDA) could arguably be one example. With PDA, users can look at an ebook in a library catalogue and, if they want it, get access with one click (and the library is charged), mimicking the ease of access of Amazon, iTunes, etc.

Hence this combines the best of the ebook-buying industry with the traditional library cost to the user ($0).

But perhaps Amazon ebook buying and borrowing books from the library are substitutes, not alternatives.

In which case, the rise of maker spaces in both academic and public libraries could perhaps be an even better example of looking across alternative industries and taking in the characteristics, if not the functions, of alternatives.

An older example could be the conversion of spaces in libraries to support collaborative learning and discussion. While this may stray from the traditional library purpose of providing access to books and information, it does help draw usage by pulling in the attributes and values of alternative reasons to visit libraries.

Of course, such strategies run the risk of "mission creep"; Hugh Rundle's "Mission creep - a 3D printer will not save your library" is a well known response to this.

Yet another example could be web scale discovery services that marry the ease of use of web search engines with the academic content of databases.

The idea of embedded librarianship, where librarians leave the library and set up shop in the offices of faculty or teaching hospitals, can arguably also be seen as librarianship taking on characteristics of service industries, like doctors making house calls.

Look across strategic groups within industries

This one is tricky: it involves trying to carve out new spaces across segments (typically segmented on price and performance) in a given industry. One example given was the Sony Walkman in the 70s, which combined "the high fidelity of boom boxes with the low price and mobility of transistor radios" within the audio equipment industry.

I am having trouble coming up with examples for this, basically because libraries generally don't compete with one another, nor do we segment markets based on price and performance.

It could be I simply don't understand this one.

Look across chain of buyers

This simply points out that the purchasers who pay for the product might be different from the actual users.

Each group may value different things: for example, the person who purchases for a corporation might be more concerned about price and more willing to trade off functionality than the actual users.

The idea here is to see if one could target a different set of buyers than the traditional set.

The example given was Bloomberg in the 80s, which started targeting individual analysts as opposed to IT managers. They added features that appealed to analysts, even including purchasing services that let traders buy gifts and book holidays, because while traders were wealthy, they were also time poor.

Another example given was how a company shifted from targeting doctors to targeting patients, allowing them to administer insulin themselves.

For libraries, I can think of the following examples.

Targeting faculty to influence students to "buy" reference and information services: this is pretty old hat.

A somewhat more unconventional idea was in "The Undergraduate Research Project at the University of Rochester", an ethnographic study of students.

They found "Students told us that their parents often edit their papers and advise them about assignments, so we decided to get to know parents through the library's sponsorship of the parent breakfast held during the class of 2010 orientation." (pg 12)

The other thing I can think of is how librarians, through work advocating for open access mandates or citation/bibliometrics standards in promotion and tenure systems, can arguably influence the "purchasing" of such related services from librarians.

I say arguably because the direction of cause and effect could be debated here.

Look across functional or emotional appeal to buyers

This refers to how most industries have either

a) a functional orientation, or

b) an emotional orientation.

Companies that manage to challenge these orientations may unlock new oceans.

Examples given are Swatch, which added an emotional component, and QB House, which went in the other direction towards more functional services, where extras were stripped away and the focus was on speed.

I guess few would disagree with me that library services lean strongly towards the functional orientation.

One good example, I think, is what Mal Booth's University of Technology, Sydney Library is trying to achieve.

UTS Library Spectrogram 

There is also an increased focus on "user experience", with user experience librarian jobs and, recently, the establishment of Weave - Journal of Library User Experience.

And of course libraries are now also spending a lot of effort on how library spaces make people feel....

Look across time

This is pretty obvious, look at some trend and try to project how it will ultimately affect your business and move to that point first!

In some ways you could say libraries are not too bad at this, at least in terms of technology trends (or are we?). We are pretty early adopters of most IT trends, trying everything from 24/7 chat services and web conferencing for classes to Second Life (which didn't work out well), though arguably we dropped the ball on search.

We see the writing on the wall for library space to house print materials and many libraries are slowly preparing for the day where print is not as dominant.

The authors state that the trends you look at need to be

i) decisive to your business
ii) irreversible
iii) on a clear trajectory

Besides the slow shift from print towards electronic (completed for journals, and slowly happening for most monographs) and the trend towards increased remote access, another trend I think fits these three criteria is the rise of open access.

Others may disagree, of course, but if open access is going to be the norm, academic libraries should prepare for the day when a lot of their services will be disrupted, and start to think about what an academic library would look like if most articles were open access.

Or alternatively, as Ithaka S+R Senior Anthropologist Nancy Fried Foster asks, "what it would be like to design academic libraries based not on precedent, but rather on everything we can learn right now about the work practices of the people who already use them".


So here was my attempt to apply blue ocean strategies to find new markets. I'm not sure how successful it was, particularly since I concentrated on fitting in examples I already knew about rather than generating genuinely new ideas.

Perhaps you can do better?

Saturday, February 15, 2014

Day in the life of a librarian - An academic librarian in Singapore 2014

Even though the Day in the life of a librarian project by Bobbi Newman has ended, I have decided to continue the tradition of posting about my day-to-day work every January.

I've been told that my blog posts are useful as a snapshot of the type of work academic librarians do in Singapore. While you can find information on what academic librarians do on the web, people interested in academic librarianship in Singapore might wonder if it applies here as well. The answer, as far as I can tell from comparing notes with international colleagues, is yes: our job scopes and tasks are mostly similar (barring differences such as the fact that we are not faculty, though not all US academic librarians are either).

Here are past editions.
As usual, I am reconstructing a lot of it from my emails, calendar, etc.

Jan 20, Monday

My standard routine every morning hasn't changed much. Below is from the 2013 edition.

"As per my normal practice, I spend the first 10-20 minutes each day looking at search queries made in our FAQ system (on the LibAnswers platform), looking at Google Analytics to see which pages are most popular in various systems like our guides, FAQ, portal etc. (particularly important this period, so we can react quickly to developing situations) or to see the response to our email marketing of certain pages.

Also for the past 6 months, I added an additional routine of trying out sample queries done by our users on our new discovery system to ensure nothing strange is happening. "

By now, though, we have had more than a year of official use of our discovery system Summon, so I pretty much have a system in place for noticing out-of-the-ordinary searches that indicate a user is having problems with a query. This time I noticed an abnormally high number of queries (or rather refinements) for searches related to the Eighteenth Century Collections Online (ECCO) and Early English Books Online (EEBO) databases.

And when I checked, I realized the problem wasn't that users couldn't find the entry in Summon (occasionally results might be buried and I need to create a best bet or recommendation), but rather that the link displayed was broken.

So I emailed our eresources management team to update the broken links in 360 Core (the default URL needed to be changed slightly).

I also spent some time via email arranging and confirming schedules for

  • A meetup with a new research staff member who wanted a quick session on EndNote
  • A meeting at the NUS Bukit Timah Campus to meet up with 2 new tutors and give a lecture/briefing on Summon 2.0, which we had just soft launched.

Answered a request to attend a meeting on the evaluation of a new electronic tool.

Officially amended a popular FAQ on wireless access.

Today, I was also scheduled to do chat duty, which is one of my favorite activities. There was the typical mix of questions about eresource access issues, passwords and the odd question from non-NUS community members (including librarians and library students).

Jan 21, Tuesday

Received an email that I would be reappointed to the "Bibliometric Team". In a way this is nothing really new; I have been a member of the previously named "Cited Reference Team" since 2009, holding workshops on the use of Scopus, Web of Science and, recently, Google Scholar to obtain citation metrics. More interesting was the announcement of a big umbrella "Scholarly Communication Committee", under which bibliometrics would be placed.

Most of the pieces were already there, e.g. the team on institutional repositories, promotions, etc.; this just pulls it together.

This morning, I also visited the NUS Bukit Timah Campus to meet up with the 2 tutors and then give a talk on Summon 2.0 to students. A pretty productive meeting.

Produced statistics on demand for a project.

Jan 22, Wednesday

Struggled with the fact that the new LexisNexis Academic interface accidentally broke our one-click 360 Link resolver from Summon. Of course, we reported it the moment we found out to Serials Solutions (now known as ProQuest; see press release), because it affected linking to The Straits Times (the national daily here) and was drawing some questions.

Of course, Serials Solutions acknowledged the problem when we reported it, but based on past experience it would take months for the linker to be updated. I tried to mitigate this by changing the database order in 360 Link to go to Factiva instead, which carried the same paper.

But today I got smart and tested Factiva, and it was broken too. Hmm. (See what I did on Friday.)

Today was the day for information advisories, where I was scheduled for 2 face-to-face meetings: one with a student and one with a research staff member.

First up in the morning was a PhD student who wanted to trace a particular idea and terminology that spanned two different domains. The person wanted to see if there was any writing in the literature that explicitly mentioned that link.

Incidentally, this resulted from a chat query we received last week, after which the person requested a face-to-face meeting, as this obviously wasn't something to settle over chat.

I am not a subject expert by any means: I knew zero about one domain and had only a small amount of knowledge of the other from past personal interest. I did, of course, do a quick read-up on the PhD student's topic over a few days.

One of the things I've come to realize over the years is that assisting a patron is not always a matter of
  • asking for keywords
  • selecting a suitable database, tossing in different keywords and all the fancy search syntax
  • sending the resulting query to the user

This is not to say that this procedure has no value; tons of students are searching in obviously the wrong places, so the moment you give them a search string in the right source, they get tons of exactly what they want and will just go "Oh thanks, great!" and disappear (especially via chat).

That's usually fine if you are talking about essay-writing assignments.

The problem is when you are helping someone at a higher level who is searching in the obviously right places but can't find anything along the lines of "this paper PROVES exactly what I want to say".

My strategy in such situations is to try to leverage the person's expertise (this PhD student had given a talk on this very topic!) and to ask questions: How did you come up with the idea of this linkage? Were you inspired by a certain book or article? (These may actually provide good starting points.) The aim is to understand where the person is coming from and to suggest less direct approaches to support what they are trying to say.

By serving as a non-threatening sounding board for ideas, I hope the beginning researcher benefits.

At the end of the session, after trying the various techniques I knew for finding cross-disciplinary research and coming up with very little, we agreed that the link between the two domains seems obvious but, as far as we could tell, has never been explored directly, and that she would probably have to approach it more indirectly (e.g. literature on how terms and ideas are transferred from one domain to another definitely exists, just not, it seems, for the 2 domains she was studying).

It's somewhat touching to see that some graduate students think we librarians have superpowers, and that if they fail to find something it is because they are doing it wrong, so they feel reassured when told that's not always true (this particular person had had help from other librarians before, so she was definitely searching the usual suspects). Of course, it is possible the exact thing she wants does exist, but I would wager probably not in a direct form.

In the afternoon was another session, with a researcher just hired by our institution; it turned out he wanted to learn mostly about EndNote.

Sent an email informing users about upcoming changes to Google Scholar, the proxy bookmarklet and Pidgin (the chat client we use).

Jan 23, Thursday

This morning I received mail officially announcing the promotion of our new Deputy Director and the resulting promotions of 2 new heads of libraries. I already knew it was coming, but it was good to make it official.

Today, I am doing something new! I spent the day helping to man the loans counter. While many of our libraries here have gone with the combined desk approach, merging circulation and reference, the library I am at, Central Library, still has the traditional 2-desk approach, with the information/reference desk that I usually man on a separate floor from the loans counter.

As a result, despite having close to 7 years of experience, I had never actually manned the circulation counter or checked out a book. I've been asking for the opportunity to do so for quite a while, to gain the experience, and this year I got it.

Today is just my second time, but what I experienced confirms what I had already surmised: the idea of a combined desk makes so much sense! I guess that's why most academic libraries have made the shift in the last few years.

Besides learning more about the intralibrary loan process, claiming of fines, membership issues etc, I got the opportunity to observe the types of questions people ask at the loans counter. 

This is of great interest to me, as I lead the team on Library FAQs/Knowledge base and am rated on how well the FAQs perform on answering queries.

As I expected, the queries were often a mixture of loans-related and reference-related questions. The library staff at the loans desk are extremely experienced with membership and loan queries, but they were understandably not as familiar with other aspects of library services.

Add in the often long lines (the semester had just begun), and the staff at the loans desk just don't have the time to answer complicated questions, or even relatively simple ones that take time to answer, such as "I am a new student, can you show me what services the library offers?"

As I was shadowing the existing library staff at the loans counter, I could help with such queries, essentially recreating the combined desk approach, given that the counter had a spare PC to use as well.

I had often suspected that a lot of interesting questions and opportunities for interaction were happening at the loans desk. Many new staff or students automatically approach the loans counter (which is on the entrance level) to sort out teething issues when they start, so the very first person they encounter from the library is the staff there.

This was borne out when I had the good fortune to talk to a third-year student, who informed me of a course I wasn't aware of that had just started this year.

While at the desk, I also received a query from another librarian about assisting with a benchmarking study to be conducted by a faculty.

Also confirmed a talk I was giving at another library in Singapore about implementing chat services. 

Jan 24, Friday

First off, the whole morning was spent at a department meeting, talking about strategic plans, etc. I was happy to hear about one impending change. As for myself, as usual I wonder if I have planned to do too much. We shall see.

After the meeting, I had to rush to do a report meant for university administration, which meant pulling out statistics again.

Created a couple of draft FAQs.

Remember the linking issue I mentioned earlier about LexisNexis Academic? I finally recalled that about a year ago, support at Serials Solutions offered the option to turn off one-click linking on a per-provider basis. I requested they do so for LexisNexis to fix the problem. Not ideal, since users will land on the 360 Link page and still have to click the "read article" link, but better than the current situation where they just see a blank iframe.


My yearly blog posts are extremely disjointed. I suspect even some full-fledged academic librarians might not be able to follow 100% of what I was writing about, much less people not yet in the profession.

Still, I believe it does give you a taste of what academic librarians do, though the mix of activities might differ.

If you are not a librarian, you might be wondering if the diversity of tasks I do is normal.

I suspect I am more of a "generalist" than many librarians, but from what I understand after attending an exchange programme with the other two university libraries in Singapore, most academic librarians in Singapore these days work in a system where one has a core area of specialty relating to librarianship (circulation, faculties management, electronic resource management, cataloguing, reference, bibliometrics, library IT, acquisitions, etc.) plus a liaison role to a specific department, which usually carries some outreach, orientation and teaching duties.

In short, you need to be able to learn new things quickly, and expect lots of changes. This isn't the job for people who want to do the same things day in day out! 

Sunday, January 19, 2014

3 ways libraries try to help improve search results in discovery services

Library web scale discovery systems are great. They break down the silos between books, articles and other content types. They provide the "one search" box experience that our users claim to want.

But problems exist (see my overview, 8 things we know about web scale discovery systems in 2013, and outstanding issues). In my experience, one of the stickiest issues is getting relevant results.

A typical academic library catalogue serves up somewhere in the range of 1-10 million entries (most of which are books) in the index. But once you add articles, conference proceedings and even newspaper articles to the index, you easily get 300-500 million results (depending on how aggressively you include free content, newspaper articles, non-full-text material, etc.), at least a 50-100 fold increase in many cases (see this old 2012 post surveying the index sizes of some ARL libraries on Summon).

Does this increase in content make relevancy ranking easier (because there are now more possible "good" targets to surface) or harder (the "good" targets are buried by the noise of other irrelevant items)? I suspect it may depend on librarians prudently adding sources with high relevance, rather than adding the kitchen sink of sources just because they can (more on this in future posts).

Leaving that aside, the issue is that of the "big 4" discovery services, few provide a way for the library to tune the relevancy ranking directly, even if the library is unhappy with the results. The same relevancy ranking applies to all customers of Summon, EBSCO Discovery Service (EDS), etc. I believe that of the four, Primo Central is the only one that allows tweaking of the ranking, and only for the locally hosted version.

In any case, tweaking relevancy ranking even if allowed is not trivial.

So what can librarians do if they are not happy with the relevancy ranking?

Here are some of the implementation choices made by libraries, which seem to me to be attempts to address perceived flaws in the relevancy ranking of discovery services for particular use cases, without directly touching the relevancy ranking itself.

These changes tend to address issues in known item searching and the finding of catalogue items, as well as subject searches.

1. Change of default settings to exclude format types (newspaper articles/reviews) and other settings

While one cannot directly adjust the relevancy ranking in most web scale discovery services, libraries can adjust other settings which do affect the results being shown.

For example, early adopter libraries on Summon noticed an issue early on: results were often flooded with newspaper articles and book reviews. This prompted an "Exclude newspaper articles" switch to be positioned prominently on the interface, on top of, presumably, a downward adjustment of the weights for those format types.

The exclude newspaper articles switch

The switch of course allows easy removal of unwanted newspaper articles by the user.

Still, during my survey of Summon libraries in How are libraries designing their search boxes? (I), I found quite a few libraries sporting a design similar to the next picture.

Some libraries have two separate checkboxes for excluding newspaper articles and book reviews, instead of the combined one used at James Cook University Library, but the key point is that these libraries have decided to exclude such items by default.

This is of course a serious trade-off, because occasionally users are indeed searching for newspaper articles, and they may not notice that the defaults have them turned off, and hence fail to find the item. In my institution, I see many daily searches for items in our local newspapers and keyword terms that obviously refer to the hottest news topics.

In my institution we struggled with this decision as well.

In the end, we decided to exclude newspapers and book reviews (the latter was less controversial), because we found that in many cases, without filtering newspaper articles and book reviews by default, the results for known item searches for books, databases, etc., and to some extent subject searches, would be much poorer due to too many newspaper articles and book reviews. In particular, we suspected users would get frustrated because they couldn't find a known book in the top 10 results, thanks to the numerous book reviews and newspaper articles.

A full title search in Summon was generally fine, but *book title* + *author* for popular or generic titles, or *partial book title* + *author*, sometimes had issues.

An example would be the search gladwell outliers (the full title is Outliers: The Story of Success by Gladwell): with newspapers and book reviews filtered, the book would appear, as of writing, 6th or 7th (in most catalogues it would be 1st), but in a full unrestricted search it would drop out of the top 10.

Arguably, libraries that decided to filter away newspaper articles by default are essentially saying that in most cases, the relevancy ranking isn't good enough to know when to display the right format types.

Interestingly enough, Summon 2.0 seems to try to address this with spotlighting/grouping of newspaper articles.

This grouping of newspaper articles has a "mini-bento" effect (see later), and it will be interesting to see if libraries start removing the exclude newspaper articles checkbox from their default searches with this feature in place.

While EBSCO Discovery Service, like Summon, does not allow tuning of the relevancy ranking, it generally allows libraries more options in terms of default settings, including:

  • Apply related words
  • Also search within the full text of the articles
  • Available in library collection (if Off - items not available to the library directly would be shown)

While most libraries, like MIT Libraries and Georgia State University Library, by default turn on "Also search within the full text of the articles", I notice some libraries have chosen not to do so.

Georgia State University library EDS by default searches in full-text

For example, this library does not seem to turn on full text indexing

It's somewhat interesting that in EDS you actually have to turn on full-text matching, rather than it being the default, for historical reasons, so it's possible some libraries have left it off by accident.

Then again, I know at least one library that has explicitly decided not to turn on full-text matching in EDS, because they found the results were generally worse.

If true, again, I find this choice fascinating, since one of the key selling points of web scale discovery services is being able to match within the full text.

I would add that Summon does not have an option to search within metadata only (though some librarians on the Summon mailing list have asked for it). I have personally simulated "metadata only" searches by matching keywords in title OR subject OR abstract (this matches the default option in Scopus, but of course is not a complete metadata-only search), and I find the results can often be reliably superior for certain limited classes of searches, so there may be something to leaving out full-text matching occasionally.
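To make the idea concrete, here is a rough sketch of how such a simulated metadata-only query can be built by OR-ing the same keywords across a few metadata fields. This is purely my own illustration; the field labels are assumptions for the sake of example, not actual Summon query syntax.

```python
# A rough illustration of simulating a "metadata only" search by
# combining the same keywords across several metadata fields with OR.
# The field labels here are assumptions, not actual Summon syntax.
def metadata_only_query(keywords):
    """Build a boolean query matching keywords in title, subject or abstract."""
    fields = ["Title", "SubjectTerms", "Abstract"]
    clauses = ["{}:({})".format(field, keywords) for field in fields]
    return " OR ".join(clauses)

print(metadata_only_query("known item searching"))
```

Of course, this only approximates a metadata-only search, since fields like author or publication title are left out.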

2. Best bets / Placards / Known item calls out

When we launched our version of Summon, one of the complaints we received concerned the difficulty of known item searches, and this was even though we had expected it and tried to adjust for it by removing newspaper articles and book reviews by default (see above).

For example, we had a complaint that someone was unable to find the link to the journal Urban Geography, because Summon was filling the top 10 with books with that title and not showing a link to the journal record.

Our tests prior to launch did show that the vast majority of our 100 most searched journal titles (drawn from Encore, our old discovery catalogue) did indeed yield a link to the journal record in the top 10. Side note: we have a one-record approach for journals, cataloguing each journal title in a single record that combines all print and electronic holdings, regardless of vendor.

But Urban Geography was one of those journal titles that failed this test.

At the time, we couldn't do much about it. Then, a few days later in late December 2012, Serials Solutions launched the "best bets" feature.

This allowed us to create messages and links that appear when certain keywords are matched. So naturally I did one for Urban Geography.

You can see how it appears below.
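Conceptually, a best bet behaves like a simple lookup: if a trigger phrase appears in the user's query, a pinned message and link are shown above the results. The sketch below is purely my own illustration of that idea, not Serials Solutions' implementation; the mapping and URL are made-up placeholders.

```python
# Purely illustrative sketch of the best bets idea: a trigger phrase
# matched in the query surfaces a pinned message and link above results.
# The mapping and URL below are made-up placeholders.
BEST_BETS = {
    "urban geography": ("Urban Geography (journal)",
                        "https://example.org/record/urban-geography"),
}

def check_best_bets(query):
    """Return (label, link) pairs for best bets triggered by the query."""
    q = query.lower()
    return [bet for phrase, bet in BEST_BETS.items() if phrase in q]

print(check_best_bets("urban geography impact factor"))
```

The key property is that the librarian, not the relevancy ranking, decides what surfaces for that phrase.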

I've noticed that while Summon is pretty good at displaying the catalogue record or the 360 Core record for journals, usually at the top, it sometimes fails for journals with generic one- or two-word titles.

Obviously, it correctly handles "Nature" and "Science", but there are journal titles like Oncology (0030-2414), for example, that it fails to bring to the top (for our Summon instance at least). Sometimes it's a partial-title issue, e.g. Clinical Oncology: A Journal of the Royal College of Radiologists, where the user expects "Clinical oncology" to pull it up.

That's where best bets comes in.

For database searches, you can also use the database recommender in Summon, though currently not every database you subscribe to can be added; so if a search doesn't surface our database catalogue record as a top 10 result, I create a best bet for it.

Typically, the catalogue record for the database fails to appear when the user types some slight variant of the database title in our record.

You may be wondering why I don't simply modify the catalogue record for journals or databases to include variant names, etc. I've actually tried that, but it often seems to have very little impact on the ranking.

An example for us is arXiv.

It may be possible to compare with other Summon instances to see why their records rank higher (assuming they do), and if so, whether there is something different about their MARC records, but this involves so many factors (what other material is turned on, etc.) that it's easier just to add a best bet.

As mentioned earlier, users also have problems finding specific books; generally these fall into
  • user types *book title* + *author* 
  • user types *partial book title* + *author* 
The first usually works (assuming you already remove newspaper articles + book reviews by default), unless the title is really generic and/or the author/book is extremely well known (so there are many book reviews, articles mentioning it, etc. appearing instead).

A good example is the following search:

The irony is that in Summon (other web scale discovery systems may or may not differ), typing just the book title what is history often gets you what you want just fine, but "helping" by adding the author makes things worse.

Incidentally, it seems EDS also has a feature similar to Summon's best bets; see below for an example from MIT Libraries.

I am not too sure about the details of EDS's feature, though I suspect it is currently like Summon's: a manual process, where the librarian identifies a needed match, then sets up the message and link.

Obviously, an algorithm that automatically suggests known item matches would be better. For instance, the system would notice a match or partial match (say on the 245$a field) against a journal or database title and suggest a link to it.

I believe UIUC's Suggestion system has that level of smarts and is integrated into their Primo Central version.
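A minimal sketch of what such suggestion logic might look like, purely illustrative and not how UIUC or any vendor actually implements it, is fuzzy matching of the query against a list of journal/database titles (e.g. drawn from 245$a fields):

```python
# Illustrative sketch of automated known-item suggestion: fuzzy-match the
# user's query against known titles and suggest close matches.
# The threshold and title list are arbitrary assumptions.
from difflib import SequenceMatcher

def suggest_known_items(query, titles, threshold=0.8):
    """Return titles similar enough to the query, best match first."""
    q = query.lower().strip()
    scored = [(t, SequenceMatcher(None, q, t.lower()).ratio()) for t in titles]
    matches = [(t, s) for t, s in scored if s >= threshold]
    return sorted(matches, key=lambda m: m[1], reverse=True)

titles = ["Urban Geography", "Clinical Oncology", "Nature"]
print(suggest_known_items("urban geography", titles))
```

A real system would of course need to handle partial titles, variant names and the sheer scale of the title list, but the basic idea is the same: detect a likely known item search and pin a direct link.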

3. Bento style systems

The first two implementations don't require much in-house capacity, but the third way involves committing some resources: the bento style approach.

Out of the box, discovery services provide a standard, single result list, with all item types interfiled together, which leads to the problems already mentioned, e.g. books, databases and other catalogue items "lost" among newspaper and journal articles.

Hence the idea of a bento style system, where you have multiple boxes of different content (sometimes by format) displayed on the same page.

Today this is a common idea; libraries from Princeton, Dartmouth, Columbia, etc. all provide this style of display.

To me, the innovators in this space were NCSU and Villanova University's VuFind implementation.

It seems to me that right now we are split into two different types of implementation, from the functional point of view. (See also the different degrees of technical implementation.)

There is the two-column display approach, first implemented by Villanova University: one column for the catalogue results, often dubbed "Books & More" (the data might be drawn from the catalogue, or it might be the discovery service suitably filtered), and another for article-level results (typically drawn from an article index).

Then there is what Lorcan Dempsey dubs "full library discovery", which typically has a number of result lists, including not just books and articles but also results drawn from silos like:

  • Database & journal title lists
  • Library webpages
  • LibGuides
  • FAQs
  • Librarian profiles
  • Institutional repositories

One way to explain this trend is that there is mixed evidence on whether the blended, single result list style is what users want, as discussed in a Bibliographic Wilderness blog post.

If we take the Full library discovery view, we can say we are evolving towards an approach where the search combines our typical content with library services and expertise.

While such benefits are real, both approaches also seem to mistrust the ability of the relevancy system to appropriately rank content of varying and disparate types. Depending on the approach, one can just plug in the article index from the discovery service and rely on other systems, such as the ILS, for the ranking of catalogue results.
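The underlying mechanic of bento, as I understand it, can be sketched as follows: query each silo independently and show only the top few results from each in its own box, sidestepping the need for one ranking across disparate content types. The "search engines" below are stand-in placeholders, not any real API.

```python
# Illustrative sketch of the bento idea: each silo is searched separately
# and contributes its own small ranked box, so no single relevancy
# ranking has to interfile disparate content types.
# The lambda "search engines" below are stand-ins, not real APIs.
def bento_search(query, silos, per_box=3):
    """Return the top results from each silo, keyed by box label."""
    return {label: search(query)[:per_box] for label, search in silos.items()}

silos = {
    "Books & More": lambda q: ["book hit for " + q],
    "Articles": lambda q: ["article hit for " + q],
    "LibGuides": lambda q: ["guide hit for " + q],
}
print(bento_search("urban geography", silos))
```

The design choice is that ranking only ever happens within a silo, where the content is homogeneous, which is exactly why the approach sidesteps the cross-format relevancy problem.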


I don't want to give the impression that the relevancy ranking of web scale discovery services is horrible; I think it is usually serviceable, but it can be much improved (though the perceptions and expectations of librarians vary).

Rather, the challenge facing discovery vendors is a big one: they need to rank across huge stores of data and content formats (each with different amounts of metadata and full text) and across all subject domains (it's easier to rank results in subject-specific databases, since there is no ambiguity about what a term means), so holes do exist.

Add the challenge of telling when the user is likely to be doing a known item search versus a subject search, and the difficulty mounts.

And by virtue of the stated aims of discovery services, they compete directly with Google and Google Scholar, which have world-class relevancy ranking (to put it mildly), so users have very high expectations.

It is no wonder there is dissatisfaction with relevancy ranking and librarians do what they can to help out.

BTW, if you want to keep up with articles, blog posts, videos, etc. on web scale discovery, do consider subscribing to my custom magazine curated on Flipboard, or looking at the bibliography on web scale discovery services.

Sunday, January 5, 2014

Questioning the status quo

I like to question why. I don't believe the status quo or the way things are is always the best that is possible. I ask why. I agitate for change. I try new things. 

But after almost 7 years of working on various projects and initiating various changes, "the way things are" has slowly started to shift towards what I had a hand in deciding, or at least helped guide thinking on, in some areas at least.

In other words, I am slowly becoming part of the status quo in a small way. So it's really interesting to be at the other end of the questioning when new young (they are almost always young) staff who just joined the library ask me why things are the way they are.

This typically happens when I am partnering with them at the reference desk. I am supposed to train them and impart knowledge and experience, but I must admit the most interesting part for me is when, after I explain a certain policy or procedure, they go...

"I don't understand, it makes no sense to me. Why do we do it?"

or the more diplomatic 

"This may be a stupid question, but I always wondered when I was a student why the library does X or does not do Y"

You have to understand the psychology of such questions.

When you are new to a workplace, you generally try to learn and understand before trying to poke holes. This is true even if, say, you have a lot of experience from another library, and most of our new hires are freshly minted librarians who have even less ground to stand on to ask such questions.

So for such fresh new librarians to ask such questions implies that the current status quo defies common sense (for them at least) to such a degree that it overrides their hesitation to even venture to question it.

To put it more bluntly, they feel the thing I explained is so wrong (to them, at a gut level) that they can't resist questioning it, despite knowing they know little.

As the Bible puts it, "out of the mouths of babes..."; some of what they say can be quite insightful or interesting.

I am human enough to sometimes feel defensive (especially if the questions are about my work), but in general I find these questions fascinating, and I try to consider them without the usual "your lack of experience is showing" bias that more experienced people tend to have. (I like to think I succeed, but only others can judge.)

No matter how much you try to remain open-minded and "question everything", and how much you think you are biased towards change, in time, as you rise up the ranks, you inevitably start to accept what exists as the norm.

That's human nature. So it's important to try to seriously consider the question from new staff who can see things with fresh eyes. 

The types of questions they ask tend to fall into two categories:

1. Reasons exist for the status quo.

2. No reason exists (as far as I know); it's just that nobody ever thought to question the status quo and change it for the better.

When I say reasons exist, I don't necessarily mean good reasons exist, though of course often they do. 

Often the new staff member questioning just isn't aware of the full details, or just hasn't thought deeply enough about the issue, and when I explain the reasoning or the difficulties, they accept that the existing status quo isn't obviously silly.

It is of course sometimes a matter of judgement what counts as a good reason for the way things are. But I often find it interesting when newcomers raise the very objections I myself agree with, even as I explain the official line.

Other times, I like to explain in detail the thinking behind the way things are and the various options we tried or considered trying (this works best for things directly under my charge, such as Summon, social media, chat, etc., where I can give a full accounting), and then ask, "Given the following issues/constraints, what would you do if you were in charge?"
This often leads them to reconsider. They may not agree that the current status quo is the best way to do things, but I think it still helps to encourage the idea that nothing is set in stone and everything is amenable to change.

Still, the most interesting questions are those where I don't know why things are the way they are, and it never even occurred to me to ask.

Of course, just because I don't know why things aren't otherwise doesn't mean there isn't a good reason (I don't know everything that goes on!). 

Usually there's a good reason when I ask. 

Sometimes though it's a relic of a procedure/thinking that was passed down through the decades without rethinking.

Years ago, one of my first library projects, studying improvements to the cataloguing workflow, caught a certain procedure that was dutifully done for every book ever catalogued.

None of the long-time cataloguers I asked could give a good reason why we were doing it. Each speculated a different reason. Eventually it dawned on us that it was a step that had been necessary only back in the days before computerisation.

But that's beside the point; even if it turns out in the end that there is a good and sufficient reason, having your mind opened this way, to see a familiar thing anew and wonder why, is a great gift.

I believe, the librarians who care enough to always wonder and never stop asking why are the ones who will push the boundaries and make a difference. 

One of the things I worry about after 6-7 years of working in the same place is that I will get too comfortable and stop asking why. That's why it's important to force yourself out of your rut and get thrown into new situations outside your comfort zone.


Sunday, December 1, 2013

Library and Blue Ocean strategies (I) - the case of discovery services

As part of a new goal to start reading sources outside the library world for ideas, I have been reading Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant and I must say it is surprisingly insightful.

For those unfamiliar with the concept, a blue ocean strategy contrasts with a red ocean strategy, where firms in an industry compete head-on along traditional lines.

They typically compete on price, or on the same traditional factors the industry has always competed along, fighting over the same user base, which may be static or even declining. This results in declining profits across the industry as costs rise in the fight for the same consumer dollars.

A blue ocean strategy aims to make the competition irrelevant. A pithy line in the book says, "The only way to beat the competition is to stop trying to beat the competition."

That sounds paradoxical, but only if you define "beat" in a limited way, as competing along the usual industry-defined lines. The insight (in some ways obvious) is that firms should instead try to create brand new markets.

One example often used is Cirque du Soleil. Apparently the circus industry as a whole was shrinking, as kids began to prefer console games, the internet, etc. to visiting the circus. There was a limited number of famous circus performers (who weren't that famous anyway), so circuses competed to employ them, raising costs. Animal rights activists also made acts with animals increasingly difficult and expensive.

Instead of competing along the usual circus industry factors of

i) price
ii) star performers
iii) animal shows


Cirque du Soleil changed the rules of the game. They started to blend drama and theatre with their acts, combining circus with classical theatre. By offering a more intellectual experience, they opened up the market, drawing in theatre-going audiences.

By adding storylines and themes and increasing the allure of the big top, they created a brand new industry-spanning market across theatre and circus, with multiple productions in the manner of Broadway shows.

It was not just about adding or raising factors; they also reduced and eliminated attributes that carried high costs.

For example, they eliminated animal shows and the need to hire top traditional circus acts, bypassing the whole problem of rising costs there.

This is what the book calls "value innovation": innovation that creates more value for consumers but at lower cost, breaking the age-old value vs. cost trade-off.

As the book was written in 2004, some of the examples look dated to my eyes; a more modern example would of course be Apple.

I am still absorbing the framework and analytic tools in the book to employ blue ocean strategies but I think the idea of blue ocean strategies is very important for libraries.

There are differences: libraries don't really compete with each other within the industry, our competitors are Google, etc., and we don't really have profits, but still, let's see what we can learn when we apply such tools.

I think, like most industries, we have always focused on red ocean strategies, basically how to make existing processes better. We are good at tracking input and output statistics, at doing process improvement, etc. Increasingly, we do benchmarking studies, which focus more on what other libraries are doing and on making sure we do the same.

Red ocean strategies are important, no doubt, and they will always be the bulk of our strategies. But they won't suffice alone.

This is particularly so since our industry is similar to the circus industry: market demand is falling as users start to prefer alternatives to what we offer.

In such a situation, doing the same thing better with incremental improvements isn't going to help. If anything we are in this situation because our competitors such as Google are employing blue ocean strategies on us!

So we need blue ocean strategies to create new ideas and strategies to open new markets, rather than hope to thrive by refining existing processes. We need to know what new things to do, not just how to do existing things better.

The most innovative thinkers in our industry have come up with a slew of ideas, from:

  • Library as a publisher
  • Embedded librarianship
  • Maker spaces
  • Supporting research data management
  • Increased focus on information literacy, etc.
How did they come up with such ideas? Are they viable? In future posts, I will try to apply the various analytical frameworks in the book, such as the ERRC grid, the six paths framework and the strategy canvas, to find blue ocean strategies.

But for now, let me try to analyse a purely business decision, the strategic move to launch web scale discovery services.

As you know, web scale discovery services have swept the academic library world since 2009, with most academic libraries having implemented one or thinking of implementing one.

Summon alone boasts over 700 customers, and others like EDS and Primo Central boast similar numbers or more.

It's fair to say the strategic move to launch web scale discovery services can be considered a blue ocean strategy by ProQuest et al., creating a brand new market where none existed before.

A good way to look at this is with a strategy canvas, listing the main competing factors of the industry.

Producers in the aggregator database industry were basically competing on content and on the feature sets of their interfaces, and to some extent on price.

Signing up content providers to carry their full text was costly, especially for premium names, and so was the constant creation of advanced feature sets that required a lot of training by vendor trainers and librarians to use.

At the other extreme, Google was drawing in users because of its ease of use and large result sets (not all of which were scholarly).

Web scale discovery is based on the insight that one can provide an industry-spanning product which offers the best of both worlds.

The value curve above shows web scale discovery vs. academic databases vs. Google.

A good blue ocean strategy exhibits three characteristics: focus, divergence and a tagline.

This blue ocean strategy exhibits focus.

It turns out very complicated interface features alienated a large proportion of the user base, so web scale discovery services like Summon dispense with all but a well-selected set of features, directing sophisticated users to traditional databases for more complicated needs. (Other competitors like EBSCO and Ex Libris repurpose existing UIs, also reducing cost.)

It would have been prohibitively costly to try to shoehorn all the different sources of data into complicated feature sets, so this works out nicely anyway.

You see divergence in the value curves of web scale discovery and traditional databases: discovery services raise ease of use compared with library databases, as well as authoritativeness of results compared with Google. Throw in the additional factor of including the library catalogue (the "one search" concept) and you have a unique value curve.

Lastly, the tagline for web scale discovery could have been "Google, but for academic content", a concept that captured the hearts of both undergraduates (the actual users) and librarians (the influencers), creating a brand new market spanning the web search industry and the traditional library database market.

One can quibble about how accurate this analysis is. For example, is it really less costly for ProQuest et al. to sign up providers to add content to the discovery index than to include the full text in a database? (There are well-known issues with content providers refusing to provide content to discovery indexes.)

Is the "low price" factor in the value curve above really correct? (Web scale discovery services are often priced very high to libraries, depending on the product, but I suspect this boils down to the strategic decisions of the major players; the discovery vendors that hope for lock-in may price low.)

Lastly, while web scale discovery could be said to be a blue ocean strategy for ProQuest when they initially launched Summon, they were closely followed by competitors, and currently there is bloody competition.

Still, from the point of view of libraries, I think this analysis leaves out the elephant in the room - Google Scholar. Google Scholar is in fact the prototypical "web scale discovery", offering the speed and ease of use of Google with "scholarly results", and Google got there first.

Once we include Google Scholar in the analysis, web scale discovery is arguably a red ocean strategy - a direct clash with Google. Is this a battle we (by we I mean the whole library industry) can win?

Some libraries think we can't, and this brand of thinking - opting out because Google has already won - has become popular lately.

From the point of view of discovery vendors, though, it was a successful blue ocean strategy, because the influencers in the buying process - i.e. the librarians - tend to mistrust Google and do not include Google Scholar in their analysis, since it is not something we can actually buy.

But as time passes, the actual end users may exert an influence that makes us reconsider.

Of course, the race is now on: how does one craft a blue ocean strategy to create a unique value curve differentiating, say, Summon from Google Scholar?

That's the subject of a future post.

Sunday, November 10, 2013

Is Summon alone good enough for systematic reviews? Some thoughts.

Edit: Read these speculations with caution - actual tests need to be done. After posting these speculations, I did a couple of actual tests duplicating the *exact* limited searches done for Google Scholar but in Summon, and in a few examples (not all) the result sets exploded even with restrictions to journal articles and limited disciplines (e.g. Medicine), so precision with Summon might even be *worse* than Google Scholar with the very same search statement!

In other cases, Summon yielded fewer results than Google Scholar with the exact same search statement, but with a big decrease in recall.

Attempts to use Summon's more advanced search features - wildcards and longer search statements not possible in Google Scholar - actually exploded the result set even further.

Even though I am not a medical librarian, I have read with interest the recent paper "Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough" by Martin Boeker, Werner Vach and Edith Motschall.

The paper

  • Translated search strategies used to find relevant papers in past systematic reviews into Google Scholar equivalent search statements (as close as possible anyway)
  • Checked how many relevant papers were found (the papers found in the original systematic review are the "gold standard" of what is considered relevant)
  • Calculated the recall and precision of using Google Scholar compared to traditional systematic review methods of searching multiple databases (typically Medline, Web of Science, Cochrane Library etc)
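To make the calculation concrete, here is a minimal Python sketch (with made-up paper IDs, purely illustrative) of how recall and precision are computed against the gold standard:

```python
# Illustrative only: made-up paper IDs, not data from the actual study.
gold_standard = {"paper_a", "paper_b", "paper_c", "paper_d"}         # found by the original review
retrieved = {"paper_a", "paper_b", "paper_c", "paper_x", "paper_y"}  # found by Google Scholar

relevant_retrieved = gold_standard & retrieved

recall = len(relevant_retrieved) / len(gold_standard)  # how much of the gold standard we found
precision = len(relevant_retrieved) / len(retrieved)   # how much of what we found is relevant

print(f"recall={recall:.2f}, precision={precision:.2f}")  # recall=0.75, precision=0.60
```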

The results aren't particularly surprising. As argued by many other papers and blog posts, despite Google Scholar's large, nearly comprehensive coverage of studies, which allows it to pick up the papers using just one source (93% recall in this paper), Google Scholar has many weaknesses making it unsuitable for use alone in systematic reviews. In particular, the lack of precision due to the lack of advanced search features is a big one.

As I read through the paper, which is the most comprehensive one I have seen detailing the various weaknesses of Google Scholar for systematic reviews, I couldn't help but think how many of the critiques in it would parallel those for Summon.

In the past, I have blogged about How Google is different from traditional databases, and later I mused about how library web scale discovery services, in particular Summon, are closer to Google and Google Scholar, but not quite there yet.

On one hand, Summon has many of the same characteristics as Google Scholar. With breadth unmatched by traditional databases, it was also designed to maximise recall at the cost of precision, with features like auto-stemming that make it feel Google-like.

But on the other hand, Summon does have more advanced search features (though somewhat well hidden), greater stability of results, and more transparent sources.

So how does Summon stack up? Let me go through the critiques against Google Scholar and see if they apply to Summon.

Here's a short summary of some issues in Google Scholar.

  • Maximum 1,000 results, 20 results per page - Summon has the same 1,000-result limit, but shows 50 results per page
  • No bulk export - Summon has the same limitation; Zotero allows export of results page by page for both
  • Lack of search history - Summon has the same limitation
  • Limited advanced search interface - Summon 1.0 is the same; Summon 2.0 is better
  • Lack of truncation and advanced field searches - Summon is better
  • Inability to nest logical operators more than one level - Summon is better
  • Query length limited to 256 characters - Summon does not have this limitation
  • Autostemming leads to lack of control - Summon has the same limitation
  • Reliability and stability of index - Summon is better, with a more transparent listing of sources

Overall: there are reasons to believe one could better translate traditional complicated search strategies to Summon, which might result in better recall and precision (assuming Summon's full index is comparable to Google Scholar's), but an actual study is needed to confirm this - one that would take some expertise to translate the search strategies and even more time to look through the results.

But similar to Google Scholar, limitations like the maximum of 1,000 results and the lack of bulk export might make this moot anyway.

For more detail, read on!

Let's start with graphical interface features.

Quotes below are from provisional PDF of "Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough" by Martin Boeker, Werner Vach and Edith Motschall allowed under BioMed Central Open Access license agreement. 

"Not more than 1000 results of the complete result set can be displayed in steps of 
maximum 20 results per page."

Google Scholar can show 20 results per page

At 20 results per page, Google Scholar stops at page 50 = 1,000 results

Many people are surprised to learn that regardless of the number of results Google Scholar finds, you can see at most 1,000 of them - it won't show more. (Google is similar, though where it stops showing results varies.)

In Summon 1.0, one can increase maximum number of results to 50 per page compared to 20 per page for Google Scholar. But you can't get more than 1,000 results.

At 50 results per page, Summon 1.0 stops at page 20, click "next" and you get an error.

Error when you go past 1,000 results in Summon 1.0

In Summon 2.0, there is no concept of pages, thanks to the so-called "infinite scroll" feature. I can't tell if it still limits you to 1,000 results, but this might be moot (see below).

Frankly if there is any one reason not to use Google Scholar, this one alone would be sufficient, since many searches done would have >1,000 results. Still, let's press on.

"No bulk export of results is available. Results can only  be exported into reference management software (e.g. ZOTERO)"

Yet another killer, since you need to mass-export the results you get, ideally all of them in one export.

With Google Scholar, you can export item by item; for mass export, the only option is to pair it with Zotero so you can bulk upload results page by page. Unfortunately, Google Scholar shows a maximum of 20 results per page, so it may take a while to export everything. (Wild idea: use the Publish or Perish software to get everything in one shot? Though that is still limited to 1,000 results, plus the search limitations.)

Using Zotero with Google Scholar to mass export all the results on one page at a time

Summon 1.0 is exactly the same with no bulk export. You can use it with Zotero, exactly as in Google Scholar, so you can bulk export all the results in one page. Here it works slightly better than with Google Scholar, since you can set the page to display up to 50 results per page as mentioned.

Using Zotero with Summon 1.0 to mass export all the results on one page (50) at a time
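The page-size difference matters for effort. A quick back-of-envelope calculation (assuming the 1,000-result ceiling applies to both services) shows how many page-by-page Zotero exports each would take:

```python
import math

max_viewable = 1000  # neither service shows more than 1,000 results

# Zotero exports one page of results at a time, so page size determines
# how many export operations are needed to capture everything viewable.
gs_exports = math.ceil(max_viewable / 20)      # Google Scholar: 20 results per page
summon_exports = math.ceil(max_viewable / 50)  # Summon 1.0: up to 50 results per page

print(gs_exports, summon_exports)  # 50 vs 20 export operations
```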

Summon 2.0, as mentioned, has "infinite scroll". I think this feature pretty much kills the possibility of quick bulk export. Or would I, as a researcher, keep scrolling down until the end and then bulk export with Zotero?

Kinda moot for now anyway, as Zotero does not currently work with Summon 2.0.

Lack of "a history function which temporarily stores retrieval results for incremental refinement of search strategies"

This is important in systematic reviews of course: it gives you more control and makes controlled searches and exploration of search strategies easier, but it is currently lacking in Summon as well. As a side note, some web scale discovery services like EDS do support this.

"It is not achievable to construct all possible expressions in the advanced search interface 
due to the limited number of available entry fields. Only one field for each type of 
expression (conjunction, disjunction and conjunction of phrases) is available"

Google Scholar advanced search screen

Assuming I understand this objection properly, Summon 1.0 is even worse: you can't do that at all in advanced search, since all the fields are combined with AND.

                                                Summon 1.0 advanced search screen

In Summon 2.0, the advanced search is much improved, with a pull-down menu covering
  • abstract
  • title
  • publication title
  • author
  • date
  • full-text
  • subject term etc
It also allows you to add additional boxes if necessary connected by logical operators.

                                              Summon 2.0 advanced search screen

"Some fields of the advanced search interface are not available in a search expression as a keyword or field indicator. Whereas authors can be specifically searched for with the field indicator ‘author’ in an expression like ’author:“author name”, the date is not accessible by a field indicator."

This refers, I think, to the oddity where you can use Google Scholar's advanced search to restrict by date or publication title, but there is no equivalent way of doing so with keyword syntax.

Summon doesn't have this oddity, though in Summon 1.0, the advanced search only gives you limited fields to search with
  • title
  • author
  • publication title
  • isbn/issn
  • date
All of these can be searched using search syntax alone in the basic search.

In fact, many other fields can be searched if you know the syntax, so as with Google Scholar, to get the best use out of it you had to construct the long, complicated search in a text editor and then transfer it to the basic search box.

In Summon 2.0, as mentioned above, the advanced search is much improved, with pull-down menus covering additional fields (abstract, full-text, subject term, DOI, etc.) and the ability to add additional boxes if necessary.

The paper mentions that a search expression builder and search history are desirable, though their absence can be tolerated by advanced power users.

Now let's look at the other issues.

"Search expressions were limited to a length of 230 characters due to the restriction of a 
total of 256 characters"

This is a barrier to creating complicated search strategies in Google Scholar as it has a character limit for length of search queries. This is a big deal because many search strategies needed for systematic reviews are extremely long and complicated.

The paper notes that the median length of the Medline searches was 777.5 characters. But because of the character limit in Google Scholar, the authors had to simplify the searches; in the study, the "translated" Google Scholar searches had a median length of only 187.5 characters!

This is a big limitation of course.

As far as I can tell from some testing, Summon does not share this limitation. If the query is long enough, browser limits on processing long URLs (typically >2,000 characters) will start to come into play, but this isn't a limitation of Summon per se.
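A rough sketch of that browser-side constraint (the host and query parameter here are hypothetical, and the ~2,000-character threshold is a common rule of thumb rather than a hard standard):

```python
from urllib.parse import urlencode

# Hypothetical Summon search URL; real hosts and parameter names vary by installation.
base_url = "http://example.summon.serialssolutions.com/search"

# A long Medline-style query, repeated here just to inflate the length.
query = "(neoplasms OR cancer OR tumour OR tumor) AND (screening OR early detection) " * 30

url = base_url + "?" + urlencode({"q": query.strip()})

# ~2,000 characters is the usual safe limit for older browsers (notably IE).
if len(url) > 2000:
    print(f"warning: URL is {len(url)} characters; some browsers may truncate it")
```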

"Terms in Google Scholar are complete single words (truncation is not possible)"

Based on the support files, Summon allows truncation (but not within quotes) and proximity operators (though without taking word order into account).

"Google Scholar applies automatic stemming to terms where the stem is recognizable for Google Scholar. However, this mechanism might not be reliable for domain specific language (e.g. the medical language)."

Summon does autostemming too. My understanding is that adding quotes around terms in Summon gives a higher relevancy boost to items containing the exact terms, but does not remove autostemming per se. So here it is similar to Google Scholar and may not be as precise as you might want.

"Logical operators can be used, though only without nesting of logical subexpressions 
deeper than one level."

The paper also warns that "correct interpretation of logical connectors" still needs to be improved, with Google Scholar often giving illogical number of results.

As far as I can test, in Summon you can nest boolean operators to more than one level. Still, there are indications in the support files that complicated nested boolean searches might sometimes give odd results, with a request to report them if seen.

A well known example was where adding quotes would occasionally give MORE results, as reported in this article. So for example

sheep dip flies

would give you fewer results than

“sheep dip” flies

My understanding of the issue was that Summon by default would do implied proximity matching (within 200 words) for three or more search terms found in the full text, in an attempt to filter out totally irrelevant results where the words were extremely far apart, but would switch this off when quotes were used.
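To illustrate the idea (this is my own naive simplification, not Summon's actual algorithm), an implied-proximity filter would only count a full-text match when all the terms fall within a 200-word window:

```python
from itertools import product

def within_window(text, terms, window=200):
    """Naive sketch: True if every term occurs and some combination of their
    occurrences spans no more than `window` words. NOT Summon's real code."""
    words = text.lower().split()
    positions = []
    for term in terms:
        hits = [i for i, w in enumerate(words) if w == term.lower()]
        if not hits:
            return False  # a term is missing entirely
        positions.append(hits)
    return any(max(combo) - min(combo) <= window for combo in product(*positions))

near = "sheep dip kills flies"
far = "sheep graze here " + "filler " * 300 + "dip in the flies"

print(within_window(near, ["sheep", "dip", "flies"]))  # True: all terms adjacent
print(within_window(far, ["sheep", "dip", "flies"]))   # False: >200 words apart
```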

In any case, in the latest versions this issue has been resolved.

"The currency of Google Scholar may not be very high for some resources. The update 
period for certain resources is up to nine months. Although research results indicate 
very high coverage of Google Scholar, the exact coverage is not known. Google itself 
states that it does not index journals, only articles, and does not claim to be exhaustive."

I am not aware of any study measuring how current Summon's indexing is (though no doubt libraries evaluating Summon and its rivals have done some testing). That said, Serials Solutions claims to index content periodically, at a frequency appropriate to the type of material (e.g. if the journal is monthly, it will be indexed monthly).

Also, Summon claims to index at the journal issue level, so this differs from Google Scholar, and one would expect more consistency here.

Reliability and stability of search results over time and place is not sufficient

A somewhat related critique of Google Scholar for systematic review searches is how unstable the results are. Due to the way Google Scholar works - crawling the web, including sites allowed by publishers, institutional repositories, and even authors' personal homepages - articles may suddenly drop out of the index when a page becomes unavailable from crawl to crawl.

"GS’ changing content, unknown updating practices and poor reliability make it an inappropriate sole choice for systematic reviewers. As searchers, we were often uncertain that results found one day in GS had not changed a day later"

Summon and most web scale discovery services would presumably be less prone to this, since they don't trawl the web for articles (except for Institutional Repositories harvested via OAI-PMH).

Also, while Summon isn't as transparent about what is covered in its index as some would wish, they do produce a list showing, by journal title, the coverage and level of indexing (metadata only or full text).

Difficulty of translation of standard search strategies from Medline to Google Scholar syntax

This pretty much drives the conclusion (see next point)

Google Scholar is extremely limited in what can be done with searches, and this results in extremely imprecise searches compared to what can be done in Medline (whether via Ovid or PubMed), Web of Science, etc.

In this aspect, Summon seems to lie in between Google Scholar and other databases

Unlike Google Scholar, Summon can handle 
  • Truncation
  • Proximity
  • Multiple level of nesting
  • Search queries >256 characters
  • More fields for searching including subject terms, abstract (Summon 2.0)
  • Filtering by content types (eg journal articles), discipline (Medical, Economics etc)
But it's still limited compared to the search syntax available in Medline - if, say, you want to use MeSH headings, thesauri, etc., Summon doesn't have controlled vocabulary support at all for focusing, exploding, and the like.

On the other hand, as noted in an earlier paper critiquing Google Scholar, the lack of a Google Scholar "search filtering option to limit the scope of search results 'by discipline' such as 'health and medicine'" can lead to an explosion of results.

Summon does have a discipline facet  for medicine, biology etc, though it's unclear how accurate it is to use.

As I am not a medical librarian, I can't tell if Summon's search features are sufficient, though looking at the paper's discussion of examples where the translated Google Scholar strategy fails, the main issues seem to be
  • Lack of truncation support in Google Scholar
  • Google Scholar's limited query length
So it's likely Summon's search features might in fact be sufficient, since neither limitation applies to Summon.

Google Scholar has good recall but much worse precision 

The meat of the whole paper is this. The results show, as expected, that recall is good, with 93% of relevant results retrieved, but the lack of precision in the searches Google Scholar allows (see above) can mean a lot more effort wading through the results to pick out the relevant ones. (In fact, the same limitations prevent 100% recall.)

"Our investigation suggests that due to the low precision of Google Scholar searches a user 
has to check about 20 times more references on relevance compared to the standard approach 
using multiple searches in traditional literature databases. In the majority of cases this implies 
for checking 10,000 or more references."

I got into a Twitter discussion about how even this statement is misleading, or at least impractical, because Google Scholar cannot currently show more than 1,000 results. That's a very good point, though in the paper's defense, it does acknowledge this calculation as "completely hypothetical".

I feel the paper makes an interesting point here that is easily missed due to the title.

The low precision of Google Scholar is not necessarily the main argument to avoid using it for systematic reviews!

Why? While you might need to check 20x more references when using Google Scholar alone, compared to traditional systematic review techniques, you save on time in other ways.

For example, traditional methods require that you query multiple databases and translate the same search to each of them. You then need to spend time deduplicating results from these different sources, etc. All this is additional time that might offset the 20x lack of precision.
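A back-of-envelope calculation with made-up numbers (not taken from the paper) shows the order of magnitude involved:

```python
# Illustrative numbers only - not data from the paper.
refs_traditional = 500   # references screened in a typical multi-database search
precision_penalty = 20   # Google Scholar searches are ~20x less precise

refs_google_scholar = refs_traditional * precision_penalty
print(refs_google_scholar)  # 10000 - consistent with the paper's "10,000 or more"
```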

By this argument, like Google Scholar, Summon might still be worth using even if it is less precise than traditional methods; it all depends on the numbers.

But how would the recall and precision figures for Summon stack up compared to Google Scholar?

There is in fact reason to suspect the precision might be better, due to the ability to craft more controlled searches, but without a formal study this is not definite. It is possible that Google Scholar's better relevancy ranking offsets this, so if one is restricted to only the top 1,000 results (which is in fact the case), Summon might be worse.

Also, is there reason to suspect Summon would yield as high a recall as Google Scholar? Sure, the ability to create long, comprehensive search syntax would allow one to pick up more papers (e.g. using a long list of drug names), but is Summon's index coverage as good as Google Scholar's?

This is unclear to me. Presumably in Summon you would use "add results beyond your library collection" to search the full Summon index, and perhaps exclude content types that are not relevant. Combine that with a login to your institution to get in as much A&I content as possible (mostly Web of Science plus ProQuest A&Is; I don't think Scopus results are in yet despite this announcement) and maybe use the discipline facet to refine further.

But even that would be a relatively small index compared to Google Scholar's.

I am also unsure whether the Summon index includes the whole of Medline, or even the PubMed index, or the Cochrane Library, etc.

But again, we need a formal study.


All in all, the results are not surprising: despite lacking many of Google Scholar's flaws, Summon isn't good enough to use alone for systematic reviews, though there are reasons to suspect it might allow more precise searches given its superiority over Google Scholar in truncation, nested searches, and much longer queries.

The degree to which this is true, and the degree to which it would help precision, is hard to tell without actually redoing the study with Summon, as Summon still lacks features like rich metadata and controlled vocabulary, along with the related advanced search features for focusing/exploding subject terms.

Still, the following missing features, which mirror Google Scholar's, make it almost totally unusable even if it is true that Summon has much better precision and similar recall to Google Scholar:
  • Maximum 1,000 results in Summon 1.0 (infinite scroll in 2.0?)
  • No Search History
  • No bulk export
It is unclear if Summon will add such features. Though they don't seem particularly hard to implement, Summon, like Google, isn't positioned as a typical power-user database, so such features might not be seen as appropriate.

The interesting thing is, while web scale discovery services are typically similar enough to discuss in a broad brush, in this case, what I discuss for Summon does not necessarily apply to other discovery services like EDS or Primo Central.

For instance, EDS does include a search history, does not seem subject to the 1,000-result limit, has of course a totally different set of facets and search filters, and may include important medical sources like Medline and the Cochrane Library databases.

BTW If you want to keep up with articles, blog posts, videos etc on web scale discovery, do consider subscribing to my custom magazine curated by me on Flipboard.


Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.