Tuesday, May 31, 2011

Free cataloging webinars

Catalogablog mentions a couple of free cataloging webinars coming up very soon:


June 2: Cataloging as Collaborative Librarianship: Partnering with Your Colleagues
This webinar, presented in collaboration with Libraries Unlimited upon the publication of Practical Strategies for Cataloging Departments, will discuss how to be more effective partners with your colleagues and how to leverage cataloging expertise. Three contributors to the volume from the University of New Mexico Libraries will examine relationships and potential collaborations with other technical services partners, such as acquisitions and collection development, as well as branching out into public services collaborations; they will also address how catalogers can take an active role in the growing area of digitization services.

More information and registration »

June 14: Cataloging Efficiencies That Make a Difference
OCLC Member Services staff have been traveling around the U.S. to hear how librarians have faced the challenge of streamlining cataloging at a time of reduced budgets and staff. These discussions have provided a great opportunity to exchange practical tips on how to become more efficient—from defining "good enough" cataloging to collaborating on improved workflows, to sharing the latest on RDA and WorldCat quality. In this webinar, two academic librarians will share their experiences of reviewing and revising technical services workflows and of cataloging e-books. We'll also discuss the key trends and strategies provided by the hundreds of library staff who have contributed to the Good Practices for Great Outcomes series so far, and will end with a discussion of where we go from here. Presenters: Daphne Kouretas, OCLC; Helen Heinrich, California State University, Northridge; Debbi Dinkins, Stetson University.

More information and registration »

MARBI papers available for review

Catalogablog gives a long list of papers available for review at ALA.

Thursday, May 26, 2011

Library of Congress May Begin Transitioning Away from MARC

The Library of Congress has announced that it is going to undertake a major reevaluation of bibliographic control in a move that could lead to a gradual transition away from the 40-year-old MARC 21 standard in which billions of metadata records are presently encoded.

"It's a ten," said Sally McCallum without hesitation when asked to rank the project's scope and importance on a scale of one to ten. McCallum is chief of the Network Development and MARC Standards Office at LOC.

The goal of the Bibliographic Framework Transition Initiative is to determine "what is needed to transform our digital framework" in the light of technological changes and budgetary constraints, said Deanna B. Marcum, the library's Associate Librarian for Library Services, who will lead the initiative. "It's very important that we find a way to link library resources to the whole world of information resources not focusing exclusively on bibliographic information," she said.

By rethinking MARC, which has supported resource sharing and cataloging cost savings for many years and is the predominant standard for the representation and communication of bibliographic and related information in machine-readable form, Marcum said that the LOC hopes to determine whether the standard can "evolve to do all the things we'd like it to do, or do we need to replace it" with something more compatible with the Internet world.

As the LOC decides what to retain from current metadata encoding standards, the library community may eventually need to get comfortable with other data structures.

"We have a huge library infrastructure very much built up over the years around the MARC format, and this will cause some disruption of that and that costs something and it has to be done smartly and carefully," said McCallum. "We can go on as we are but it's not desirable," she said.

Inspired by RDA
The hope is that a move toward new data structures will "enable bibliographic data to be used in very new technologies and technical configurations, such as the semantic web," McCallum said.

"I think we need to go into some of these new data structures with more alacrity than we have," McCallum said. "It would behoove the community to get comfortable with other data structures, like XML or RDF."

There is also a desire in the library community to "reap the full benefits of new and emerging content standards," as indicated by the comments that accompanied the testing of the new Resource Description and Access (RDA) standard, Marcum said.

RDA is a cataloguing code which covers all types of content and media (including digital resources) and was released about a year ago to replace the Anglo-American Cataloguing Rules, 2nd Edition Revised (AACR2). Its development was a recognition that libraries operate in a digital environment and have to deal with metadata creators who are not librarians. RDA integrates library cataloguing records with this new metadata, but the testing raised further issues that have spurred the new initiative.

"Many people made the comment that while the new code [RDA] will allow us to better link the disparate resources that are available, there are inherent difficulties in using MARC as the carrier for the records we create in this new code. It's just time to get serious," she said.

The Working Group on the Future of Bibliographic Control, formed in 2006, also helped drive the new agenda.

"They raised this issue. I give that group credit for raising this issue of whether it is time to reevaluate the MARC standard," Marcum said. "And I think by focusing on that question it has increased the sensitivity of all of us to the barriers that exist in our current system to making information fully accessible," Marcum said.

Change will come slowly
The LOC intends any changes to be gradual.

"MARC is going to be around for another ten years. It's used too universally," McCallum said. "There are too many services and products based in MARC, and its use will simply dwindle as people convert and as they can afford to convert," she said.

"We want change with stability," McCallum said. The LOC is mindful that libraries have to contain costs even as they are being asked to provide cataloging metadata for the exploding amount of digital material.

The project will also:

* Foster maximum re-use of library metadata in the broader web search environment.
* Explore the use of data models such as Functional Requirements for Bibliographic Records (FRBR) in navigating relationships, whether those are actively encoded by librarians or made discernible by the semantic web.
* Plan for bringing existing metadata into new bibliographic systems within the broader Library of Congress technical infrastructure.

Marcum said the initiative will be "fully collaborative," and an initial discussion will take place in June at the annual conference of the American Library Association in New Orleans. A series of meetings with stakeholders is expected in 2012 and 2013.

From Michael Kelley, Library Journal, May 26, 2011

Wednesday, May 25, 2011

NISO Recommended Practice on SSO Authentication open for comments

NISO Recommended Practice on Single Sign-On Authentication Available for Public Comment Identifies Needed Improvements for Users Authenticating to Licensed Electronic Resources

NISO announces the availability of ESPReSSO: Establishing Suggested Practices Regarding Single Sign-On (NISO RP-11-201x) for public comment through June 22, 2011. ESPReSSO identifies practical solutions for improving the use of single sign-on authentication technologies to ensure a seamless experience for the user.

Currently a hybrid environment of authentication practices exists, including older methods of userid/password, IP authentication, or proxy servers along with newer federated authentication protocols such as Athens and Shibboleth.
This recommended practice identifies changes that can be made immediately to improve the authentication experience for the user, even in a hybrid situation, while encouraging both publishers/service providers and libraries to transition to the newer Security Assertion Markup Language (SAML)-based authentication, such as Shibboleth.
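
To make the hybrid picture a little more concrete, here is a rough, hypothetical sketch of my own (not anything specified in the ESPReSSO draft) of an access check that lets on-campus users through by IP range and sends everyone else to a SAML/Shibboleth identity provider; the network ranges, URL, and function names are all invented.

    # Hypothetical sketch of a hybrid access check: on-campus users pass by IP
    # range, users with an existing SAML session pass, and everyone else is
    # redirected to a Shibboleth-style identity provider. The networks, URL,
    # and names below are invented for illustration only.
    import ipaddress

    CAMPUS_NETWORKS = [ipaddress.ip_network("192.0.2.0/24")]   # example range only
    IDP_SSO_URL = "https://idp.example.edu/idp/profile/SAML2/Redirect/SSO"

    def authorize(request_ip: str, has_saml_session: bool) -> str:
        """Return 'grant' or a redirect target for a licensed-resource request."""
        ip = ipaddress.ip_address(request_ip)
        if any(ip in net for net in CAMPUS_NETWORKS):
            return "grant"                      # legacy IP authentication
        if has_saml_session:
            return "grant"                      # already signed on via the IdP
        return "redirect:" + IDP_SSO_URL        # hand off to single sign-on

    print(authorize("192.0.2.17", False))    # grant (on-campus address)
    print(authorize("203.0.113.5", False))   # redirect to the identity provider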

"With the growing use of mobile devices and remote access, the older authentication methods are not manageable for either the content provider or the library," explains Steve Carmody, IT Architect, Computing and Information Services, at Brown University and co-chair of the NISO ESPReSSO Working Group. "The ESPReSSO recommendations will help bridge the transition to more robust authentication methods that better match the needs of today's users and eliminate the need for multiple identities."

"Libraries are very concerned about protecting the privacy of their patrons," states Harry Kaplanian, Director of Technology, Serials Solutions, Inc., and co-chair of the NISO ESPReSSO Working Group. "These recommendations identify methods that can be used to maintain privacy while still offering users advanced functionality, such as saving searches between sessions."

"NISO is testing various methods for identifying issues in our community where NISO can provide leadership in developing solutions," states Todd Carpenter, Managing Director of NISO. "The ESPReSSO recommended practice is the first outcome of a Chair's Initiative project, where the NISO Board of Directors Chair (then Oliver Pesch from EBSCO Information Services) identifies a specific issue that would benefit from study and the development of a recommended practice or standard."

The draft Recommended Practice and an online comment form are available at:
www.niso.org/workrooms/sso/. Publishers and distributors of licensed content as well as licensing organizations, such as libraries, are all encouraged to review and comment on the document.



Cynthia Hodgson
NISO Technical Editor Consultant
National Information Standards Organization
Email: hodgsonca@verizon.net
Phone: 301-654-2512

(Reposted from SERIALST list)

Tuesday, May 24, 2011

Transforming our Bibliographic Framework: A Statement from the Library of Congress (May 13, 2011)

The Library of Congress recently issued a statement to announce the launch of an initiative at LC to "analyze the present and future environment, identify the components of the [bibliographic] framework to support our users, and plan for the evolution from our present framework to the future—not just for the Library of Congress, but for all institutions that depend on bibliographic data shared by the Library and its partners. [...] The Library now seeks to evaluate how its resources for the creation and exchange of metadata are currently being used and how they should be directed in an era of diminishing budgets and heightened expectations in the broader library community." The full announcement may be read at: http://www.loc.gov/marc/transition/news/framework-051311.html.

Library of Congress News and Announcements, 5/24/2011

Friday, May 20, 2011

How Google PageRank actually works

An interesting article in the New York Times from a few months ago reveals some hints about how Google's page ranking actually works. The article is available here: http://www.nytimes.com/2011/02/13/business/13search.html?_r=2&partner=rss&emc=rss

The article discusses how, over the last holiday season, Penney's came up number one in searches for everything from dresses to luggage to area rugs. The Times got an expert, Doug Pierce of Blue Fountain Media, to look into the mystery, and "what he found suggests that the digital age’s most mundane act, the Google search, often represents layer upon layer of intrigue. And the intrigue starts in the sprawling, subterranean world of “black hat” optimization, the dark art of raising the profile of a Web site with methods that Google considers tantamount to cheating."

Essentially, Penney's, or somebody acting for them, got thousands of unrelated websites -- mostly set up for exactly this purpose -- to link, via phrases like "casual dresses," to Penney's website. If you get enough of these trivial links, it really does raise your PageRank.

It's a very interesting article about just how search ranking works, how Google tries to prevent people from gaming the system, and how, occasionally, it misses.
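
For the curious, here is a toy version of the basic PageRank calculation, nothing like Google's actual production system, which weighs many other signals, but enough to show how a pile of inbound links from otherwise trivial pages lifts a target page's score; the link graph and damping factor are invented.

    # Toy PageRank iteration. "target" stands in for the promoted site and the
    # "spam" pages exist only to link to it; all numbers here are illustrative.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    continue
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            rank = new_rank
        return rank

    graph = {"target": [], "other": ["target"]}
    graph.update({"spam%d" % i: ["target"] for i in range(50)})
    print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1])[:3])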

Google filtering

Google is now personalizing searches, i.e., editing search results, which creates what one critic calls the "filter bubble." What this means is that the same search on Google by different people will yield different results, depending on who Google thinks is doing the searching -- and there is no way of knowing who Google thinks that is. This filtering is invisible and based on code so complicated that even Google's own developers could not fully explain a given set of search results.
 
One thing this means is that the old "page rank" system is no longer operative. This may have implications for Google Scholar and Google Books. For example, if I'm a 9/11 conspiracy theorist, Google will prioritize links confirming my perspective and filter out links that conflict with it.
 
Google's filter bubble is discussed in a few places that I've seen. There is a 52-minute program at http://www.kqed.org/a/forum/R201105191000
 
It's discussed here and here on Rene Pickhardt's blog (the first of those links providing a guess at some of the 57 factors going into Google's filtering; the second a link to a discussion by Eli Pariser on the filter bubble).
 
Edited to add: Here's a list by Eli Pariser of 10 things you can do to "pop your filter bubble."

University of Chicago opens new library with automated retrieval system

Last week, the University of Chicago opened the Joe and Rika Mansueto Library, notable for its on-site, underground high-density storage system and the absence of browsable book stacks. Described by Inside Higher Ed as a "Batcave for the Ph.D. crowd," the storage facility has room to store 3.5 million volume equivalents. It represents a middle ground between off-site book storage and overcrowded stacks in campus libraries. Library users identify the resources they want by searching the online catalog. The loss of browsability heightens the importance of complete and accurate catalog records. Five robotic cranes are deployed to retrieve materials requested by users from among the 24,000 storage bins, a process that reportedly takes less than five minutes.

Read the complete Inside Higher Ed story, and watch a short informational video at:
http://www.insidehighered.com/news/2011/05/18/chicago_library_solves_shelf_space_question_by_burrowing_underground_using_robots

Friday, May 13, 2011

Some Comparisons between LOCKSS and Portico

Seadle, Michael. “Archiving in the networked world: by the numbers.” Library Hi Tech, 29 (1) 2011. At: http://www.emeraldinsight.com/journals.htm?issn=0737-8831&volume=29&issue=1&articleid=1912311&show=abstract
This paper examines the overlap of journals archived in both LOCKSS and in Portico, and the publishers included in the two archives. The findings show a significant overlap among the archiving systems. They also show that Portico has no prejudice against small publishers and that large publishers are as willing to choose the LOCKSS software as to choose Portico. LOCKSS does, however, archive many more small and arguably endangered publishers and may be the only economically viable choice for them.
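
The "by the numbers" comparison essentially comes down to set overlap between the two archives' journal lists; here is a minimal sketch of that sort of calculation, with invented titles standing in for the real LOCKSS and Portico data.

    # Minimal sketch of the kind of overlap calculation the paper describes,
    # using invented journal titles rather than the actual LOCKSS/Portico lists.
    lockss = {"Journal A", "Journal B", "Journal C", "Journal D"}
    portico = {"Journal B", "Journal C", "Journal E"}

    both = lockss & portico
    union = lockss | portico
    print("Archived in both: %d of %d titles (%.0f%%)"
          % (len(both), len(union), 100 * len(both) / len(union)))
    print("Only in LOCKSS: ", sorted(lockss - portico))
    print("Only in Portico:", sorted(portico - lockss))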