
Keesing’s—a Boutique Service

Cruising the exhibit hall today, I came across a familiar name from the past—Keesing’s. Years ago I used Keesing’s Contemporary Archives in print (which became Keesing’s Record of World Events in 1987). I stopped to chat with publisher Jonathan Hixon and client services representative Jennifer Vancura. They were excited to give me a sneak peek at the new full archive service the company plans to launch December 1. The new World News Archive is a Web-based database comprising nearly 100,000 articles published since 1931. The company still stands on its strict editorial controls and reputation for accuracy. Hixon said, “We’re a small boutique information service that does one thing well.” He stressed that they are writing “for the record.” I’ll take a close look at the new service in a NewsBreak when it launches.

Paula J. Hane
News Bureau Chief
Information Today, Inc.

Getting More and More Mobile

Did you know that the number of mobile phones now surpasses the number of land lines? It’s time to start thinking seriously about how to serve these mobile users.

Today’s Track B, called Hand-Held Mobile Information, has been packed with interesting tidbits about how to optimize content for mobile devices (hand-held computers, cell phones, etc.) and who’s already done it.

One thing to understand, according to Megan Fox from Simmons College (who’s on the organizing committee for this conference), is that mobile users don’t just want a shrunken version of your usual web site — “They want just the key nuggets.” Indeed, regular web sites are way too clunky to use on these tiny screens. Also, apps that use Flash and JavaScript often get stripped out when content gets optimized for mobile devices. You need to be aware of many such things if you’re trying to mobilize your content.

I saw many people using mobile devices during this show. And I heard the soft tapping of a lot of various-sized keyboards during this track’s sessions!

Kathy Dempsey
Editor in Chief, Computers in Libraries magazine
Editor, Marketing Library Services newsletter

Wednesday’s Keynote Was ‘Grantastic’!

Wednesday morning’s keynote was a lively talk by Shari Thurow, webmaster and marketing director for Grantastic Designs, Inc. She told the huge crowd how to optimize their web sites for search engines. Here are her five basic rules for good web design:
1. easy to read
2. easy to navigate
3. easy to find
4. consistent layout & design
5. quick to download

One thing she emphasized was that each page should have a well-designed, descriptive title tag.
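To make the title-tag advice concrete, here is a minimal sketch (not from the talk) that pulls a page’s title out with Python’s standard library and flags titles too short to be descriptive. The three-word threshold is an arbitrary assumption for illustration, not a rule Thurow gave.

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside the first <title> tag."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def check_title(html):
    """Return the page title and whether it looks descriptive enough."""
    parser = TitleExtractor()
    parser.feed(html)
    title = parser.title.strip()
    # Bare-bones heuristic: flag empty or one- to two-word titles.
    return title, len(title.split()) >= 3

page = "<html><head><title>Untitled</title></head><body></body></html>"
title, ok = check_title(page)
print(title, ok)  # -> Untitled False
```

A script like this, run over a site’s pages, is one quick way to find the vague “Home” and “Untitled” pages that search engines reward you for fixing.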

Also, to rank well in search results, “Use your keywords frequently, but don’t irritate the bejesus out of people.” Indeed.

Kathy Dempsey
Editor in Chief, Computers in Libraries magazine
Editor, Marketing Library Services newsletter

Monterey Farmer’s and Street Market

Every Tuesday, a farmer’s market and street fair occurs just a block from the conference center. It’s a fun place to go after a long day in sessions. You can find all sorts of merchandise: clothing, candles, jewelry, and craft items. In addition, the produce looks fresh and very colorful. And you can get a meal there as well. Here are a few scenes from yesterday’s market. Keep the market in mind for next year’s Internet Librarian conference.

Don Hawkins
IL2006 Blog Coordinator and Columnist, Information Today

The Embedded Librarian

My newest favorite phrase is “Embedded Librarian.” That’s how Christopher Connell described library staff members at the Institute for Defense Analyses (IDA) who become part of project teams. Chris was the presenter at “Information Skills and Enterprise Collaboration” this morning. He described the scenario at his workplace, where most work is done by teams who work together in an electronic collaborative workspace, Microsoft SharePoint portals. Librarian team members bring to that space their particular set of job skills—the research, selection, organization, and delivery of content. He reported that at IDA, embedded librarians often improved project outcomes and reduced the time it took for a team to complete its task. “Hello! Guess who was just appointed to her first project team a week before attending this conference?” That would be me, and while I was excited to become part of a project team at Unisys, I am now even more excited at the prospect of being able to report to my team members that having an “embedded librarian” has been done before in an enterprise environment, and that it works. Internet Librarian delivers again!

Pat Feeney
Unisys Corporation and Infotodayblog Guest Blogger

Grilled to a Turn and Nicely Done

In what has become an annual tradition, Tuesday evening’s session proved lively and controversial, discussing the topic of “Scholarship on the Web: Flying High or Free Fall.” The three panelists — Anurag Acharya, the father of Google Scholar; Jay Girotto, leader of Microsoft’s Windows Live Academic Search effort; and Juris van Rossum, the new head of Elsevier’s Scirus — faced an audience of librarians who had questions to ask and issues to raise. Rich Wiggins of Michigan State University held the panelists to answering each and every question. He even asked which of the three sources represented each would use for medical searches if their sister or brother were very ill. All three gulped and proclaimed truthfully that they would use all the sources they could find.

How much would you care to bet that all three organizations have new and/or expanded policies toward scholarly blogs in place within six months? I’ll give you odds, if you insist. That issue jumped out early and often. Controlled vocabulary was another theme held by a strong and vocal contingent in the audience. The panelists explained different factors that influenced inclusion of sources and relevance ranking of results.

Clearly the audience recognized that scholarly search engines were playing an ever-increasing role in the lives of students and faculty, a role that would only grow in years to come. The last question of the evening emphasized that point. It asked what the audience and the librarian community could do to ensure that scholarly search engines would grow into tools that could satisfy the needs of users everywhere. Help with standards, both their development and their use. Bring publishers into the fold, filling gaps. Identify the blogs that matter. Bring us your students and faculty.

Time’s up. Cookies and refreshments in the foyer. Rats! We were just getting started!

From left to right: Anurag Acharya, Jay Girotto, Juris van Rossum

Barbara Quint (the virtual presence on the phone)
Editor of Searcher

All Mashed Up

I spent the day yesterday hearing all about mashups. Last year, everyone was interested in taxonomies and folksonomies. This year, it’s mashups. Speakers in the Integrating Content track looked at mashups from a variety of viewpoints, and judging from the crowded room for most of the day, interest in them is very high.

Darlene Fichter from the University of Saskatchewan got the day rolling with an introductory presentation. The term “mashup” came from the music industry, when people started creating their own mixes, such as combining a vocal track with an instrumental one. Other applications quickly followed, and now mashups are becoming widespread.

A mashup is a Web application that combines content from one or more sources to create a completely new application. The content is typically obtained from a third party via downloading or an RSS feed, and the mashup is created using an Application Programming Interface (API). Tools for creating mashups have been developed, and by using them, one can literally create a custom Web application in five minutes.
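As a rough illustration of that definition (not one of the tools shown in the session), the sketch below joins items from a canned RSS fragment with a second, hypothetical data source to produce a new, plottable data set. In a real mashup the feed would be fetched over HTTP and the merged points handed to a mapping API; the feed contents and coordinates here are made up.

```python
import xml.etree.ElementTree as ET

# A canned RSS fragment standing in for a third-party news feed
# (a real mashup would fetch this over HTTP).
RSS = """<rss version="2.0"><channel>
  <item><title>Quake near Monterey</title><link>http://example.org/1</link></item>
  <item><title>Quake near Eureka</title><link>http://example.org/2</link></item>
</channel></rss>"""

# A second, hypothetical data source: place name -> (lat, lon).
COORDS = {"Monterey": (36.60, -121.89), "Eureka": (40.80, -124.16)}

def mash(rss_text, coords):
    """Join feed items with coordinates to make a new, plottable data set."""
    merged = []
    for item in ET.fromstring(rss_text).iter("item"):
        title = item.findtext("title")
        for place, (lat, lon) in coords.items():
            if place in title:
                merged.append({"title": title, "lat": lat, "lon": lon})
    return merged

for point in mash(RSS, COORDS):
    print(point["title"], point["lat"], point["lon"])
```

The whole “application” is one join between two sources someone else maintains, which is why the speakers could claim a custom Web application in five minutes.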

Mashups give individuals the freedom to innovate and put application creation into their hands without needing the services of a developer. They are like a piece of Lego—by itself, one block is not very useful, but when integrated into a structure, it becomes important. One can consider mashups to be lots of small pieces loosely joined.

Many mashups are trivial, but they show the potential for the technique. The vast majority of them use the Google maps application to plot geographical data. Some applications already available include:
• Housing maps using Craigslist plus Google maps
• Display of Zip codes on a map using Census Bureau data
• A route map of delivery routes
• Plot of the location of breaking news stories by type of story
• Location of earthquakes plotted on a map using US Geological Survey data
• Crime locations in Chicago using a Police Department database
• PlaceOpedia: Wikipedia articles and their locations
• Group maps for online communities
• WeatherBonk: Maps and weather data from personal and national weather data
• BookBurro: Uses data from online bookstores and library data to tell the user which libraries have a book and its cost at several bookstores
• BashR: Wikipedia articles and photos from Flickr

According to the Programmable Web, there are 1,105 mashups available today, with an average of 2.72 new ones being added every day. The Programmable Web maintains a matrix of what has been combined with what.

Creation of a mashup requires obtaining a developer token from Google or another site so that the API can be accessed. Community Walk is a simple creation tool that walks the user through the steps in creating a mashup. Use of the Google Maps application requires obtaining the latitude and longitude of the spot to be plotted, and there are databases available to facilitate this process. Other map builders include YourGMap and MapBuilder.
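A hedged sketch of that latitude/longitude lookup step (the ZIP codes, coordinates, and the tiny gazetteer here are illustrative stand-ins, not an actual database from the talk): a real lookup database maps many thousands of place names or ZIP codes, but the shape of the step is the same.

```python
# Toy stand-in for the latitude/longitude databases the speakers
# mentioned; real ones cover many thousands of places.
GAZETTEER = {
    "93940": (36.587, -121.904),  # Monterey, CA
    "48104": (42.266, -83.715),   # Ann Arbor, MI
}

def marker_params(zip_codes, gazetteer):
    """Turn ZIP codes into the 'lat,lon' strings a map API expects as markers."""
    params = []
    for z in zip_codes:
        if z in gazetteer:  # silently skip codes the database doesn't know
            lat, lon = gazetteer[z]
            params.append(f"{lat},{lon}")
    return params

print(marker_params(["93940", "48104", "00000"], GAZETTEER))
```

Once the points are in that form, a map builder such as the ones Darlene listed does the actual plotting; the mashup author’s job is mostly this translation from local data to coordinates.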

Darlene listed some of the technical issues with mashups. They are in their infancy, so some of the tools fall short of the ideal, and there are scale and dependency issues. The permanence of the underlying data sources can be a concern. It is important to be aware of intellectual property issues because you might not have the “right to remix”, and you might have to pay for using the data. There could be a dark side to mashups too, and if they are used to identify individuals, privacy issues come into play.

Tom Reamy of the KAPS Group wondered if mashups are a revolution and concluded that they are not, even though some people say that they are revolutionizing Web development. His opinion is that they are simply data integration. He pointed out that mapping data is an old idea, the current focus on technology is misplaced, and also noted that mashups need taxonomies and metadata, which are certainly not new ideas. Tom feels that mashups are still in the realm of “cool” and suffer from irrational exuberance.

Mashups can be thought of as a variant of faceted navigation, or dynamically mapping two dimensions together. Reamy suggested that we need to move beyond individual mashups to a platform for integration of a variety of dynamic sources. Mashups within the enterprise can be profitably used to integrate internal content with public Internet content. Geography is an early application for mashups because there are existing standards making it easy to develop mapping applications.

Reamy’s conclusion is that we need a community to provide ongoing ranking of mashups, simple APIs to enable social collaboration, and content structures such as metadata and taxonomies, so that we can use and build on content aggregation and faceted navigation.

John Blyberg from the Ann Arbor District Library and winner of the Talis “Mashing Up the Library 2006” competition discussed mashup applications, particularly for libraries. Advantages of mashups include:
• No advanced coding skills are needed.
• They provide instant gratification because results are instant.
• The results can be striking and elegant in presentation.
• They are a more involved and enlightened use of the Internet and are therefore part of the Evolving Web. By allowing machines to swap data, the world will become a much smarter organism. The era of Web Services is really here.
• They are an Internet tool for the proletariat and shift power to the users.

Blyberg was very emphatic that libraries not only can create mashups, but they must. Some library applications are:
• Lists of the most popular books
• Electronic signage. At the Ann Arbor library, a list of the most popular books (number of requests and copies in the system) is displayed on a large screen as people enter the library.
• Cover images of new books (see Ed Vielmetti’s Wall of Books (Superpatron))
• Creation of customized Google home pages using circulation data to show items a user has checked out, due dates, etc.

By letting the public create mashups from the library’s data, a sense of stewardship is fostered and a potential brain trust is created. Innovation is encouraged, and high-quality feedback is obtained. It can be promoted as a library service and will permit people to be part of the organically growing Web.

Chris Deweese, a developer at the Lewis & Clark Library System, demonstrated how to make a mashup from Google Maps, which has one of the easiest APIs to use. Creating one requires signing up for a Google Maps API key, and the API documentation is very useful.

The day concluded with an illustration of some of the available mashup tools. Justine Wheeler, Data Librarian at the University of Calgary, reviewed some commonly used data resources. There are two different types of data: microdata is raw, unprocessed data down to the case or respondent level (mashups with only raw data are sometimes called “dashups”), and aggregated data has been summarized. Many of the available data sets are huge and need a data extractor to obtain exactly the desired data. Justine’s lists of resources will be available on the conference Web site.

It is important to check the conditions on use of the data. Just because you can create a mashup, are you allowed to do it? It is universally forbidden to use microdata in a mashup to try to identify someone. Many data set producers provide an RSS feed to let users know when they have uploaded new data, and most of them also have a “codebook,” or documentation describing the data and how it was produced. Always read the documentation—with power comes responsibility!

Finally, Kathy Greenler Sexton, Chief Marketing Officer of HighBeam Research, illustrated how she uses mashups in her job. She uses Netvibes to create them and creates a personalized home page to track news, blogs, social information, and her personal e-mail. She also noted that Google has recently launched SearchMash, an experimental search engine.

The “mashup day” was not only fascinating but also extremely educational. Mashups will certainly play a part in our Internet experiences, and even though they are still in their infancy, we can expect to see them become prominent as individuals and commercial organizations take advantage of their power.

Don Hawkins
IL2006 Blog Coordinator and Columnist, Information Today

Talis Presents Awards for Mashing

Just before the keynote address on Tuesday morning, Paul Miller, Technology Evangelist at Talis (isn’t that a great job title?) presented the awards for its competition Mashing Up The Library 2006. The competition, which was announced in June, was intended “to openly encourage innovation in the display, use, and reuse of data from and about libraries.”

The first prize of $2,000 was awarded to John Blyberg of Ann Arbor District Library in Ann Arbor, MI. His entry, Go-Go-Google-Gadget, “showed how simply library information can be integrated into the personalized home page offered by Google.” Blyberg spoke later that morning in the mashup track.

The second prize of $1,000 was awarded to the Alliance Library System in East Peoria, IL, and its global partners in the Second Life Library. Here, accepting the award from Miller are Michael Sauers, Lori Bell, and Tom Peters. See my earlier blog entry about the project.

More information about the competition and the 18 entries it received is available from Talis. There’s also an archive file with some screen shots of the winning entries.

Paula J. Hane
News Bureau Chief
Information Today, Inc.

Where It’s At

As if all the stimulating content and entertaining speakers aren’t enough to make you want to attend this annual conference, take a look at where it’s at!

Standing across the street from the show, the Marriott is on the left, the Conference Center is (oddly enough) in the center, and the Portola Plaza Hotel is on the right.

Walking 2 minutes in one direction puts you on Fisherman’s Wharf, with its wonderful restaurants and shops right on Monterey Bay. Walk out of the Conference Center in the other direction and you’re right in colorful downtown Monterey.

Highs have been 65 to 70 degrees with plenty of sunshine.

Don’t you wish you were here? Have I convinced you to register for next year’s conference?

Kathy Dempsey
Editor in Chief, Computers in Libraries magazine
Editor, Marketing Library Services newsletter

The Challenge of the “Cyberinfrastructure”

Tuesday’s erudite and thought-provoking keynote by Clifford Lynch, Executive Director of the Coalition for Networked Information, was a marked contrast to Monday’s entertaining one. Cliff opened by assuring the audience that he had no intention of breaking into song. Fortunately, he interpreted his lofty title in the program, “Challenges of Cyberinfrastructure and Choices for Libraries,” to mean how teaching, scholarship, and learning are changing in very profound ways, and the resulting implications for libraries and for information professionals who do not necessarily work in library settings.

In most of the world outside the US, one hears talk about “e-science”: the practice of science as transformed by high-performance computation, analysis, and modeling; high-performance networking; and access to people through broad-area communications, all of which provide widespread access to expensive and unique research equipment (the Hubble telescope, large linear arrays, etc.). This process permits large-scale management, reuse, and compilation of data. A US government commission studying the corresponding process in the US coined the term “Cyberinfrastructure.”

Lynch cited national virtual observatory projects in various nations as a good example of the cyberinfrastructure. The data are contained in large sky survey databases that can be accessed and used as a “virtual telescope”, so there is no need for physical astronomical observations. One needs only to run the key parameters against the database to study the data. People with no access to the sophisticated telescopes, such as high school students, could also conduct astronomy experiments.

Most of the initial applications of the cyberinfrastructure were in the sci/tech area, but more recently, the same debates on digital study have begun to occur in the humanities disciplines. Some of the issues are: How do we get data reused and preserved? How do we help scientists structure their data? Museums and libraries have begun to enter these discussions and are launching digitization projects. Special collections (papers of key people, institutions, etc.) are very important for research in many areas of the humanities. These collections are changing and becoming digitized. We must deal not only with how people practice scholarship but with how they approach life.

Many scholars work from a small collection of documents which they study intensively. Now scholars cannot deal with the scope of available records because there is not enough time to read them all. Corporate litigation works on the same scale, so information retrieval and data mining techniques are increasingly used to find relevant information.

Fifteen years ago, most information professionals worked for centralized IT departments. Now, more than half of them are found in departments and on research teams, closer to the end users. Scientists are becoming more interested in sharing and reusing data and the importance of its preservation, so they are turning to information professionals to help them in these tasks. Funding institutions are starting to realize that there is value in data, and they are starting to require grantees to include a data management and sharing plan in their applications.

Who will be satisfying these demands? Lynch envisions a new professional, a “data scientist”, emerging. What do they need to know? Will they be librarians or researchers? What expertise will they need? Will “data scientists” be working in libraries, schools, or research laboratories?

Libraries are constantly struggling with limited acquisitions budgets. In the physical sciences, the main role of libraries has been to pay for journals. But journals are now electronic, and many scientists think that e-journals are free, so they do not have a close relationship with their library. Libraries must change along with these major environmental shifts. They will have to deliver new services that do not fit well with today’s institutional infrastructure. The rise of amateur observational science is an important development. Libraries need to be mindful of broad-based changes in research and the move of personal activity into the digital world.

I was stimulated and challenged by Cliff’s address. He gave us a view of the new world of information, which will be vastly different from the one we know today.

Don Hawkins
IL2006 Blog Coordinator and Columnist, Information Today