Archive | Charleston Conference 2010

Charleston Conference Wrapup

I just received this e-mail with some final details about the Charleston Conference:

The Windup

And so another Charleston Conference came to a close.  Attendance surpassed all previous records, at about 1,350.  You can view a timeline with all the photos shown at the opening session and high-quality photos of the posters commemorating past conferences here.

Many of the speakers’ presentations will be available on Slideshare, and summaries will be published in Against the Grain in its next few issues.

The 31st Charleston Conference will be on November 3-5, 2011, preceded by the Vendor Showcase on November 2.

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

Another 30th Anniversary Celebration

One of the notable features of the Charleston Conference is the extremely entertaining skits presented by some of the attendees.  This year’s skit offered a look at conference planning over 30 years and featured a young Katina Strauch at some of her organizational tasks.  Here are some of the photos shown during this event.  Enjoy!

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

Creating a Trillion-Field Catalog: Metadata in Google Books

Jon Orwant

Those who stayed for the last plenary presentation on Friday enjoyed a treat.  One of the most interesting and fascinating presentations of the conference was by Jon Orwant, Engineering Manager on the Google Books project.  Google Books is in accord with Google’s mission to organize the world’s information and make it universally accessible and useful.  So far, Google has scanned about 15 million books, roughly 10% of those available, which amounts to about 4 billion pages and 2 trillion words.  Google collects metadata from over 100 sources, parses the records, creates a “best” record for each data cluster, and displays appropriate parts of it on the site.  Problems arise from inconsistencies, particularly with multi-volume works and languages using non-Roman character sets.  One might think that ISBNs would help, but they are far from unique; in fact, ISBN 753305353 is shared by 1,413 books, and 6,000 ISBNs are associated with more than 20 titles each!  Google has scanned books in 463 languages, some spoken in only a small area and some no longer in use.  There are even 3 books in the database in Klingon!  (Don’t try to search for them; many of the languages do not appear in the dropdown box on the Advanced Search page.)  Books in many of the unusual languages came from Christian missionaries as a result of their evangelical work.
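The “best record” step Orwant described can be pictured with a toy sketch: given a cluster of metadata records believed to describe the same book, merge them field by field by majority vote.  This is only an illustration of the general idea; the function, field names, and sample records below are invented, and Google’s actual merging logic is far more involved than the talk (or this sketch) suggests.

```python
from collections import Counter

def best_record(cluster):
    """Merge a cluster of metadata records for the same work into one
    'best' record by taking the most common value for each field.
    A simplified illustration only, not Google's actual algorithm."""
    merged = {}
    fields = {f for rec in cluster for f in rec}
    for field in fields:
        values = [rec[field] for rec in cluster if rec.get(field)]
        if values:
            # Majority vote: pick the value reported by the most sources
            merged[field] = Counter(values).most_common(1)[0][0]
    return merged

# Three hypothetical records for the same book, from different sources
cluster = [
    {"title": "Moby-Dick", "author": "Herman Melville", "year": "1851"},
    {"title": "Moby Dick", "author": "Herman Melville", "year": "1851"},
    {"title": "Moby-Dick", "author": "Melville, Herman", "year": "1852"},
]
print(best_record(cluster))
```

Real record merging also has to weigh source reliability and reconcile the multi-volume and character-set inconsistencies mentioned above, which a per-field vote cannot capture.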

Google has developed special handling methods to scan books from libraries without damaging them and also uses sophisticated algorithms to identify textual areas, images, tables, etc.  They try to understand the various parts of a book so that all the pages can be tagged.  Once the books have been digitized and run through optical character recognition, a large corpus of data is available for searching, but also for other interesting purposes.  Using their well-known 20% “free” time, several Google engineers have developed fascinating applications, such as a mashup with Google Maps showing all place names mentioned in a book, insights into human knowledge such as how language changes over time, and publication rates of book subjects as a function of publication date.  Google even makes grants available to scientists and linguistic analysts for research projects because it considers books a corpus of human knowledge and a reflection of cultural and societal trends over time.
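One of those applications, tracking how language use changes over time, boils down to counting a word’s relative frequency per publication year across a dated corpus.  The function and the tiny sample corpus below are hypothetical, meant only to show the shape of that computation at toy scale.

```python
from collections import defaultdict

def usage_over_time(corpus, word):
    """Compute a word's relative frequency per publication year across a
    corpus of (year, text) pairs -- a toy version of the kind of
    language-change analysis built on the scanned-book corpus."""
    counts = defaultdict(int)   # occurrences of the word, per year
    totals = defaultdict(int)   # total tokens, per year
    for year, text in corpus:
        tokens = text.lower().split()
        totals[year] += len(tokens)
        counts[year] += tokens.count(word)
    return {year: counts[year] / totals[year] for year in sorted(totals)}

# Invented sample data: two "publication years" of text
corpus = [
    (1900, "the motor car is a novelty"),
    (1950, "every family wants a car and the car defines the suburb"),
]
print(usage_over_time(corpus, "car"))
```

At Google’s scale the same idea runs over trillions of words, which is what makes the resulting curves a meaningful signal of cultural change rather than noise.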

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

The E-Brarian Revolution: Collapse of Traditional Libraries and the Dawn of the New E-Empire

The future of libraries is a burning question, and sessions on it occur at conferences with great regularity.  The E-Brarian Revolution panel looked at the following questions, and offered some fascinating observations:

  • How will technology affect the future of librarians, publishers, and their offerings?
  • Will print collections be completely replaced by electronic ones in the next 20 years?
  • Will librarians as we know them no longer exist?
  • How will patrons and students use libraries dependent entirely on electronic resources?
  • What does the road to entirely digital look like, and what are publishers doing to set the pace?

Mehdi Khosrow-Pour, CEO of IGI-Global, moderated the panel and thinks that printed books will always be available because smaller libraries still cannot afford databases, people like the library experience, and the serendipity of physical browsing is impossible to replace.  He presented the results of a survey of 627 students on US campuses, in which 76% said they would pick up a printed book if it were available to them.  Mirela Roncevic, Editor of Advances in Library and Information Science, offered more data to support this, noting that we continue to talk about standardizing content formats, that content may be born digital but not be globally available, and that there are still many old business models reinforcing inefficient practices.

Lynn Connaway, Research Scientist at OCLC, reported on one of her fascinating studies and said that libraries must build services around user workflows and provide seamless access to both printed and digital materials.  She thinks that librarians will become responsible for digitization, preservation and archiving of resources as well as educating users on their information needs.  At present, users spend little time using content and tend to download much of it for printing and reading later.   If we focus on the user, many of our problems will be solved.

Rick Anderson is haunted by the iPod.  It upended the music industry and, by becoming the iPhone, is revolutionizing the communication industry.  In the 1990s, the information industry similarly did not anticipate what the web would do to scholarship.  We need to ask ourselves what is happening now that will radically redirect our industry.  What will we mean when we say “library”?  It won’t be a building full of books, but could become a collaborative research space or a central repository of scholarship and local collections.  Large databases of information are becoming available, and we risk being taken by surprise if we don’t pay attention to Google Books and HathiTrust.

The library building will still be very important in the future.  Gate counts continue to rise because people love the library space.  They love to be able to work in groups.  For example, Anderson has observed at his University of Utah library that students like to study in groups and will often rearrange the furniture to accommodate their habits.  He said that no librarian should ever say “Shhh” to students; demand for space for collaboration has far surpassed that for quiet study.  (If they need a quiet space, it can be provided.)  He also said that the biggest competition is the student union.

One of the fastest growing sectors in the information marketplace is e-books and digital content, according to Kevin Sayer, president of ebrary.  We are competing for students’ attention with the resources available on the web, primarily Google.  Students are spending more and more of their time online.  They know that they have digital resources available to them, even if they don’t use them.  Librarians therefore need to cost-effectively and efficiently acquire the information students need, improve discoverability to help them find it, and manage large amounts of data from multiple platforms and vendors.  Publishers can help by offering electronic access simultaneously with the print (some already do this and even make the electronic version accessible ahead of the print), providing flexible pricing models to meet library budgets, and leveraging new technologies for information distribution and delivery.

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

When the Rubber Meets the Road: Rethinking Your Library Collections

Roger Schonfeld

Sue Woodson

Roger Schonfeld presented the results of a survey done by his employer, Ithaka Strategy and Research, on cancelling print subscriptions and replacing them with electronic ones.  Sue Woodson, from the Welch Medical Library at Johns Hopkins University, followed up with a description of her experiences in replacing print collections with new research support services.

Schonfeld reported that most academic faculty members depend heavily on access to databases and, to a lesser extent, e-books in their research.  There is a growing perception that print collections are no longer used, and library administrators are pressing for their removal as a cost-saving measure.  Librarians are seen as playing a more vital role in the lives of their users than as custodians of printed materials.  However, despite the optimism for e-only collections, the reality is that many books and journals are not in electronic form yet.  In a report to its clients, Ithaka suggested that a 20-year timeline for a complete transformation is appropriate and that backup copies of many materials may always be necessary.

Woodson described how the Welch Medical Library is in the process of vacating its building and moving into much smaller and more remote quarters.  Some of the questions that had to be addressed were:

  • How long do we need to keep “some” copies of our journal literature?
  • How many copies need to be kept?
  • How do we coordinate distributed preservation?
  • What becomes of print when it is no longer valuable to Medicine?

These are all important questions, not only for medical libraries, but for any library contemplating moving to an all-electronic environment.

In the future, the library will emphasize remote services to its users.  The new service mantra is shown here:

Research Wherever You Are

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

Stewardship of the Scholarly Record

Brian Schottlaender

Brian Schottlaender, Audrey Geisel University Librarian at the University of California, San Diego, kicked off Friday’s plenary sessions with an in-depth review of the scholarly record, how it has changed in today’s online environment, and its stewardship (careful and responsible management).

In a 1990 article in Library Resources & Technical Services, Ross Atkinson defined the scholarly record:

The Scholarly Record, according to Atkinson

He also drew heavily on a report by Maron and Smith, Current Models of Digital Scholarly Communication, which defined the types of scholarly resources.  As recently as two years ago, electronic books were not on the list, but they would probably be included today.

We used to know well how to manage the scholarly record, but once data sources began to emerge, the concept changed because data sources are less stable.  Blogs, discussion forums, and professional and academic hubs have been added to the list of types of digital scholarly resources.  These are unstable and emergent, and it is not clear whose responsibility it is to steward them.

There are many stewardship models (do a Google Scholar search, and also see the International Journal of Digital Curation, to find articles on them).  They all conceive of stewardship as an ongoing series of activities involving multiple dimensions: information types, format types, process types, and actor/stakeholder types.  Each of these dimensions has many facets, which results in a long list of things that need to be attended to in stewardship.

Here is Schottlaender’s stewardship model for the scholarly record:

Stewardship Model

The merger of digital and print has disrupted traditional activities, leading to uncertainty about what should be stewarded and who has the responsibility to do it.  In the analog world, we could steward just the outputs because they were not coupled to their inputs.  In the digital world, inputs and outputs are coupled and should not be decoupled.  This calls for a much more expansive view of what the scholarly record encompasses and who the stakeholders are.  Librarians are the natural stewards of the scholarly record because of their expertise in curation.

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

The Role of Reference in the Open Web

It is well known that students generally start their searches on Google, Wikipedia, or similar sites.  In fact, one study by OCLC has shown that 89% of students’ searching activity is done on open websites, despite their need for information not available on those sites.

A panel led by John Dove, President of Credo Reference, considered the implications of this situation for reference services and, further, how publishers and aggregators can collaborate with open web players to the benefit of libraries and their users.

Reference service has been defined as an intermediary between a person and a body of knowledge they seek, facilitating access to that knowledge.  It can be thought of as a good filter.

Is “institutionally sponsored reference” dead?  According to pessimists, reference rooms, desks, and interviews will soon disappear, as will reference works in print.  Pessimists point to Google’s vision (no intermediary between user and knowledge, except Google, of course!).

Key questions to be asked are:

  • Why can’t we create digital intermediaries for reference that are programmed by librarians?
  • What would you want to control if you could affect the online life of your students?
  • Where do students get stuck?  (Almost two-thirds of respondents say students don’t have the right vocabulary to be able to search resources.)

Reference is not dead, but the user has moved; they’re in a new place.  We have to put the resources they need under their noses, even when they’re not asking for them.

In contrast to pessimists, optimists reply that user needs are higher than ever, and the needed content already exists.  Although users have moved, the technology exists to move with them.  The outlook for reference is better than ever, if and only if:

  • Open web players pay attention to libraries.
  • We meet users at their point of need.
  • Content is provided in context.
  • Librarians and vendors collaborate.
  • Each step in the reference process enhances information literacy.

A new knowledge delivery system is maturing.  Layers of authority are emerging, and they reinforce each other.  Reference is not dead; resources are there to provide users with disciplinary boundaries, but librarians must assert their disciplinary knowledge to leverage reference.  If we can move with users and build a bridge to where they are, we can increase information usage.  Reference has tremendous potential.

An online reference service provides:

  • Discovery:  visibility into the library, resources, and access to librarians’ expertise
  • Context:  overview, summary and vocabulary of a topic from multiple perspectives
  • Connection:  seamless integration with relevant resources chosen by your library
  • Innovation:  strategic use of technology.

We need a “North Star,” a guide to where we are going.  Online reference should bridge free websites and library-sponsored references and tools, and do it transparently to users: a marriage of new technologies and librarians’ expertise.

Click here to join a conversation on reference services, explore best practices, contribute ideas, and hear the latest about student research behavior.

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

EBSCO Discovery Service (EDS) vs. Serials Solutions Summon Faceoff

One of the most eagerly awaited sessions of the Charleston Conference was a faceoff between two prominent discovery systems: EBSCO Discovery Service (EDS) and Serials Solutions Summon.  It came about in response to a proposal from Serials Solutions Vice President Stan Sorenson following an exchange of Letters to the Editor of The Charleston Advisor (see my previous post for further details).

A large crowd gathers for the Faceoff

Faceoff Panel (L-R): Sam Brooks, Michael Gorrell, Tim Bucknall (Moderator), Mike Buschman, Jane Burke

The faceoff took the form of a series of questions from the Moderator, with each company given an equal amount of time to reply.  After the questions, live demonstrations of each system were conducted.

Here are the questions and answers:

Why do libraries need discovery tools and how does your product meet those needs?


  • Libraries must find ways to get more value from collections.
  • Collections are mostly digital now, and libraries want to rekindle their brand.
  • Discovery services have come into their own.
  • Summon is an essential element of libraries’ mission statements because it helps users find the full breadth of a library’s collection and because a single search box is critically important to users.
  • Libraries need discovery tools because users want a single search box for everything.  This is the only way to compete with Google.
  • A discovery service must leverage the advantages offered by libraries.
  • Catalogs have benefits but don’t have an ideal interface.
  • Federated searching is far too slow for today’s user and not uniformly indexed/ranked.
  • End users should not have to know database names or be required to sift through long lists of resources.
  • Discovery services offer lots more opportunity to get a lot more out of indexes.
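The contrast both vendors drew between federated search and a discovery service can be sketched in a few lines: build one inverted index across all sources ahead of time, so a query becomes a single local lookup instead of slow, live round-trips to every database.  The source names and documents below are invented for illustration; real discovery services index far richer metadata.

```python
def build_unified_index(collections):
    """Build one inverted index across several collections up front, so a
    query is one lookup rather than a live call to each source -- the core
    argument both vendors made against federated search."""
    index = {}
    for source, docs in collections.items():
        for doc_id, text in docs.items():
            for term in set(text.lower().split()):
                # Map each term to every (source, doc) that contains it
                index.setdefault(term, set()).add((source, doc_id))
    return index

# Hypothetical mini-collections: a catalog record and a journal article
collections = {
    "catalog": {"b1": "climate change atlas"},
    "journals": {"a7": "climate modelling methods"},
}
index = build_unified_index(collections)
print(sorted(index["climate"]))
```

Because the indexing work happens at ingest time, ranking can also be computed uniformly across sources, which federated search (querying each database live and stitching results together) cannot do.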

Why should a library choose your service rather than that of a competitor?


  • Depth of coverage is very important.  EDS has more sources.
  • The EDS service is very thorough.
    • It offers a seamless way to incorporate subject indexes through “platform blending”.
    • It can be incorporated with no downside and improves results and usage and record views go up dramatically.
    • Subject indexes can be incorporated even if they are not covered by EBSCOhost.
    • There are many unique features–widgets, search history, composite book records, etc.
    • Full-text searching is available for most journals, combined with subject indexing, which leads to the best relevance ranking.


It meets user expectations:

  • The service is as easy and fast as Google.
  • It delivers all results in a single index.

It is comprehensive.

  • A single unified index takes advantage of the full breadth of a library’s collections.
  • In development, we knew that federated search was available but it doesn’t meet user needs.
  • It offers a unified index across all library collections and is content-neutral: there is no publisher bias.
  • It is Unicode compliant so it can handle sources from anywhere.
  • It has a recommender feature for users.
  • It was built from open source software and made to be scalable.

It has proven value.

  • Summon has been available for more than 2 years, with proven reliability.
  • It has scaled easily.
  • It is hassle free and easy to bring up in short time.
  • It proves its value in actual measurements and results in exponential increases in database usage.
  • It is customizable and configurable and can stand alone or put in library’s web presence in whatever way library wishes.  It can be the library’s digital “front door”.
  • Users don’t need to log in or authenticate until necessary.

Summary and rebuttal


  • The federated search portion of EDS is optional, as is the extra setup time it requires, so EDS’s primary setup takes approximately the same amount of time as Summon’s.
  • End users are exposed to the best sites, so the view of unauthenticated users differs from that of authenticated ones.
  • In response to criticism that EDS is publisher-biased: any system will be biased based on the metadata of its articles.


  • It has the “finest subject indexing” from many publishers.
  • Many publishers have given Summon the full text to use in its indexing.
  • It has a match/merge capability that allows it to create the ultimate rich metadata record.  People now come to Summon to get metadata.

I just became aware of another viewpoint by Carl Grant, Chief Librarian of Ex Libris, questioning his company’s exclusion from this faceoff, branding it as “silliness”, and suggesting some questions to ask the participants.  Read it here on his blog.

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

Working Well With Wikipedia

Phoebe Ayers

Phoebe Ayers is a reference librarian at the University of California, Davis, and she is the first librarian to be elected to the Board of Trustees of the Wikimedia Foundation. Wikimedia’s vision is to create a world in which every human being can freely share in the sum of all human knowledge.

Wikipedia is available in 207 languages, 96 of which have over 10,000 articles.  Articles are written natively in each language; they are not simply translated.  With 15 million articles, Wikipedia has become by far the most used reference and information site on the Web.  All those articles are written by readers and are freely available under a Creative Commons license.  There is no top-down editorial control; everyone is free to contribute.  Articles must be free, written from a neutral point of view, verifiable, not original research, and on a notable, encyclopedic topic.

Librarians can help to improve Wikipedia by:

  • Adding references,
  • Improving articles (acting on “citation needed” and cleanup tags, etc.),
  • Looking at the category of “all pages needing cleanup” and making some of the corrections identified there,
  • Adding links to unique collections in libraries, or
  • Joining a project (see the pages for WikiProject Libraries).

Several libraries have donated large collections of content and have formed partnerships with Wikipedia.  For example, the German Federal Archives (Bundesarchiv) donated thousands of images.

Librarians are encouraged to participate in a survey, available here.

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor