Workshop on Human-Computer Interaction and Information Retrieval: HCIR 2011

The HCIR workshop began in 2007 as an experiment to see whether information science researchers were interested in meeting to discuss human-computer interaction (HCI) as it applies to information retrieval (IR), and the experiment has been highly successful. The workshop has grown steadily since then, and this year about 90 information science researchers assembled at Google’s headquarters in Mountain View, CA, on October 20, 2011, for the 5th HCIR workshop. (As operator of the most heavily used search engine on the Web, Google is a particularly appropriate organization to sponsor a workshop on human involvement in searching and information retrieval.)

According to the workshop website, “The workshop unites academic researchers and industrial practitioners working at the intersection of HCI and IR to develop more sophisticated models, tools, and evaluation metrics to support activities such as interactive information retrieval and exploratory search.”

The workshop featured a keynote address, poster sessions, presentations, and a challenge competition.

Keynote Address

Gary Marchionini

One of the highlights of the workshop was the keynote address by Gary Marchionini, Dean and Professor in the School of Information and Library Science at the University of North Carolina. He has had a long and distinguished career in the field, serving as a member of the editorial boards of several prominent journals, president of the American Society for Information Science & Technology (ASIST), and chair of several conferences. He is the author of Information Seeking in Electronic Environments and Information Concepts: From Books to Cyberspace Identities. His keynote address, “HCIR: Now the Tricky Part,” began with a look back at the history of HCIR; he noted that the two pioneers of the field (he called them its “father and mother”) are Nick Belkin from Rutgers University and Susan Dumais from Microsoft Research. He showed a diagram dividing the history into three eras and naming some of the pioneering researchers in each (I was honored to be included as a result of some of my work in the early days of online retrieval at Bell Labs):

  • Pre-1980s: Human and machine intermediaries (human search intermediaries are now largely extinct)
  • 1980s-1990s: Networks, search algorithms, words and links, no human intermediary
  • 2000-present: UIs, facets, usage patterns, social interactions (and involvement of many more people in the search process)

He then presented three case studies as examples of HCIR platforms: Open Video, the Relation Browser, and Results Space. From these he assembled a list of challenges and evaluation concerns: mixing various approaches to HCIR, information seeker behavior, retrieval and extraction, and individual and group interactions. Among the pertinent questions he raised:

  • How do we assess query quality (often the first indication of user behavior in a search)? We might think this is basic, but it is really quite difficult. Have we advanced the science to be able to say we are doing something better now? How much confidence can we put in query profiles?
  • How do we use search behavior as evidence? Can we match the behaviors to queries?
  • How do we create document surrogates and assess their effects? Surrogates tend to be active in the early stages of the search process.
  • How do we account for information seeker loads: cognitive, perceptual, and collaborative? What is the perceptual load in an environment where everything looks the same? When two people work together, they are less efficient, and we are not paying enough attention to the costs of collaboration.
  • How do we measure session quality, search quality, and solutions to problems?

Marchionini concluded that substantial progress in HCIR has been made over the last 30 years (compare today’s search experience with that of searches done on a teletype terminal by a librarian while you waited for the results), but there is still much more to learn.

Poster Sessions

Topics of the posters included collective information seeking, a high-density image retrieval interface, an interactive music information retrieval system, information needs and behavior of mobile users, search quality differences in native and foreign language searching, and search interfaces in consumer health websites. Links to articles describing the research presented in each poster are available on the workshop website.

Presentations

In the presentation sessions, authors of research articles presented their work, and as with the posters, links to all of these articles are available on the website. Here are the conclusions from a few of the presentations:

  • Chang Liu

    A study of dwell time (how long a user remains on a website) as a function of task difficulty by Chang Liu and colleagues at Rutgers University found that difficult tasks result in more diverse queries and longer dwell times on search result pages. Users with low knowledge of the search subject tend to be less efficient at selecting query terms, and those with high domain knowledge spend much more time on content pages than those with low knowledge. (A brief illustrative sketch of how dwell time can be computed from a search log appears after this list.)

  • Michael Cole

    Michael Cole, also from Rutgers, presented his group’s research on eye movement patterns during a search. Eye movement analysis is quite powerful; Cole and colleagues found strong correlations among a searcher’s level of domain knowledge, the length of time spent reading words, and reading speed.

  • Luanne Freund

    Luanne Freund from the University of British Columbia analyzed document usefulness by genre. People think about genres in different ways; labeling them is difficult for searchers, and they do not always agree about usefulness. Freund identified five types of information tasks (fact-finding, deciding, doing, learning, and problem solving) and found that usefulness scores vary considerably by task and genre.

  • Alyona Medelyan

    Alyona Medelyan from Pingar, a New Zealand-based company, evaluated five search interface features for biosciences information retrieval: query autocompletion, search expansions, faceted refinement, related searches, and search results preview. Interface features from several systems were presented to users (without identifying the service), and the users were asked to rate them. The users found facets useful as long as there were not too many to choose from, but felt negatively about autocompletion, saying it had too much of a “pigeonholing” effect on their searches. The most important thing to them was the content, not the aesthetics of the interface. Facets were useful for searching; the other features were more useful for browsing.

  • Keith Bagley

    Keith Bagley from IBM raised the interesting question of whether concepts from the travel industry could be useful in modeling searching. When we travel, milestones provide reference points along the road; many searches end prematurely because of user frustration, and perhaps searchers could share their “road maps” to success with others.

  • Gene Golovchinsky

    Gene Golovchinsky from FX Palo Alto Laboratory, Inc. studied collaboration in information seeking using the Querium system. He said that just because people talk about a document does not mean that it is useful. Systems have been developed that automatically flag documents that have been used in relevance feedback or queries that returned many useful documents. But are these enhancements useful? Is it appropriate to share results automatically? Does this kind of feedback produce better retrieval results despite users’ initial impressions?
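
Dwell time itself is straightforward to derive from an interaction log. The following minimal sketch (my own illustration, not code from the Liu study) computes per-page dwell times from a hypothetical list of timestamped page visits, assuming that a page’s dwell time is simply the gap until the next page visit in the same session:

    from datetime import datetime

    # Hypothetical click log: (session_id, timestamp, url) records.
    log = [
        ("s1", "2011-10-20 10:00:00", "serp?q=hcir"),
        ("s1", "2011-10-20 10:00:45", "paper123.html"),
        ("s1", "2011-10-20 10:05:30", "serp?q=hcir+evaluation"),
        ("s2", "2011-10-20 11:00:00", "serp?q=dwell+time"),
        ("s2", "2011-10-20 11:02:10", "paper456.html"),
    ]

    def dwell_times(events):
        """Return (session, url, seconds) for each page except the last in a session.

        Dwell time is approximated as the time between opening a page and
        opening the next page in the same session; the final page of a session
        has no following event, so its dwell time is unknown and is skipped.
        """
        by_session = {}
        for session, ts, url in events:
            by_session.setdefault(session, []).append(
                (datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), url)
            )
        results = []
        for session, visits in by_session.items():
            visits.sort()  # order each session's visits by time
            for (t1, url), (t2, _) in zip(visits, visits[1:]):
                results.append((session, url, (t2 - t1).total_seconds()))
        return results

    for session, url, seconds in dwell_times(log):
        print(f"{session}  {url:30s} {seconds:6.0f} s")

Real studies refine this in various ways (handling the last page of a session, idle time, and switches between windows), but the basic measurement is as simple as the sketch suggests.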

HCIR Challenge

The workshop organizers conducted an “HCIR Challenge”, in which search engine developers were asked to use a set of over 750,000 documents in the CiteSeer digital library of scientific literature to answer several questions focusing on the problem of information availability, in which the seeker is uncertain whether the information of interest exists at all (as in a patent search, for example). Details of the challenge and the questions are available on the workshop website.

Four teams took up the challenge:

  • The L3S team used its faceted DBLP system to answer questions 1 and 4. Their system shows facets along with the retrieved references. Clustering is based on titles of results.
  • The second team used the Querium system on questions 1 and 2.
  • The VisualPurple team used its GisterPRO system to answer questions 3 and 4. Gister does cloud-powered exploratory searches of unstructured data. It was developed for analysts who need to do difficult searches in a short time, and it searches databases visually. The only operation available to the user is quoting to construct phrases; there are no Boolean operators.
  • A team from Elsevier Labs demonstrated their query analytics workbench to answer questions 2 and 5.

The challenge winner was chosen by majority vote from members of the audience not involved in the challenge. The vote was very close between the Querium and Elsevier systems. Querium won by a narrow margin.

Future of the HCIR Workshop

The HCIR workshop has clearly been a successful experiment. It provides a unique venue for researchers in the field to discuss their results in an informal setting. As it grows, decisions will need to be made about how to guide it in the future, and an upcoming attendee survey will provide some useful input. Personally, having been one of those researchers in the past, I hope it will continue. It was a highly useful and stimulating experience; research in HCIR is making great strides, and we can expect significant improvements in search engines as a result.

Don Hawkins
Columnist, Information Today and Conference Circuit Blog Editor

 
