Wednesday, June 21, 2017

Lean Library Browser Extension



A colleague recently called my attention to a new library discovery product, the Lean Library browser extension. While a library that wishes to make the extension available to its users must pay to have it configured to work with its electronic resources, users who install the extension get seamless access to the e-resources their library licenses, without having to go to the library's website first. According to the Lean Library web site:


“It makes library services available right in the users workflow – where and when they are needed. One of those services is off campus access: the Lean Library browser extension simplifies the process of getting access to the e-resources that the library subscribes to. The browser extension works autonomously. Installing it requires a 'once only' installation process of two mouse clicks. The extension functions without the user having to subscribe, or register for an account. When used to simplify the process of getting access to licensed e-resources, it does not somehow provide 'free' access: users need to be affiliated with an academic or research institution that subscribes to those e-resources.”


The browser extension works with the library's existing authentication setup to provide access to e-resources without making users jump through all the usual hoops. They do not have to be in the library building to rely on IP address authentication, and they do not have to remember proxy server login credentials when they are away from the library.
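Lean Library's internals are proprietary, so the specifics are not public, but a common way libraries handle off-campus access is EZproxy-style URL rewriting: the publisher URL is prefixed with the library's proxy login URL, and the proxy authenticates the user and passes the request along under the library's license. The Python sketch below only illustrates that general pattern; the proxy hostname and publisher URL are hypothetical, and the extension's actual behavior may differ.

```python
from urllib.parse import quote

# Hypothetical proxy prefix; each library has its own proxy hostname.
PROXY_PREFIX = "https://login.proxy.example.edu/login?url="

def proxied_url(publisher_url: str) -> str:
    """Rewrite a publisher URL so the request is routed through the
    library's proxy server, which handles authentication on the
    user's behalf."""
    return PROXY_PREFIX + quote(publisher_url, safe=":/?&=")

# A browser extension doing this kind of rewriting could send the user here
# instead of to the unauthenticated publisher page.
print(proxied_url("https://publisher.example.com/article/12345"))
```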


In addition to its main purpose of simplifying access to licensed e-resources, Lean Library offers a few other features. It can provide the library with analytics about e-resource use, and if a user tries to access an article that is not licensed through their library, it can redirect them to an open access version when one exists.


 More information about Lean Library can be found in this blog post on Musings About Librarianship.

Wednesday, June 14, 2017

LexisNexis acquires case analytics firm Ravel Law

Data is the name of the game! And now Ravel Law, a legal research and analytics company, has shown just how profitable data can be. LexisNexis has acquired the firm and plans to use its technology to enhance Lexis services. Ravel uses machine-learning techniques to analyze litigation records and predict the behavior of judges, firms, and courts. Ravel is also working to complete a project with Harvard University to digitize all of the case law in the school's library. Ravel Law chief executive Daniel Lewis says Lexis will support that effort by providing public access and expanding the materials available through APIs.

This acquisition shows the further utility and adoption of artificial intelligence in analytics tools created for legal research. Find the entire article at http://www.abajournal.com/news/article/lexisnexis_acquires_ravel_law

Tuesday, June 6, 2017

OCLC Works with Wikipedia to Link Citations to WorldCat

Sources are integral to verifying facts in articles, and OCLC has been working with Wikimedia’s Wikipedia Library to improve linking of citations to library materials in WorldCat. OCLC’s WorldCat Search API has been integrated into Wikimedia’s cite tool, an interface that “helps editors automatically generate and add citations that link back to resources represented in WorldCat.”
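For those curious about what the integration builds on, the WorldCat Search API can be queried over HTTP with an OCLC-issued WSKey. The Python sketch below is only a rough illustration using the requests library; the OpenSearch endpoint path and parameters reflect my reading of OCLC's public documentation for the v1 API and should be verified against it, and the WSKey value is a placeholder.

```python
import requests  # pip install requests

# Placeholder key; OCLC issues WSKeys to member institutions.
WSKEY = "YOUR_WSKEY_HERE"

# OpenSearch endpoint of the WorldCat Search API (v1); check the path
# against OCLC's current documentation before relying on it.
URL = "http://www.worldcat.org/webservices/catalog/search/worldcat/opensearch"

params = {
    "q": "uniform system of citation",  # example keyword query
    "format": "atom",                   # ask for an Atom feed of matches
    "wskey": WSKEY,
}

response = requests.get(URL, params=params)
response.raise_for_status()
print(response.text[:500])  # first part of the Atom feed of WorldCat records
```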

You can get more details in OCLC’s press release at http://www.oclc.org/en/news/releases/2017/201713dublin.html.

Monday, June 5, 2017

ALLStAR

We live in an increasingly data-driven world, and if you're anything like me, you find playing with data and statistics fun, interesting, rewarding, and sometimes confusing and frustrating. It can also be time-consuming: my colleagues and I spent more hours than I'd like to think about completing all the surveys for our reporting agencies this year. When there's a tool available to make all of this easier and less time-consuming, I'm immediately interested in learning more.

ALLStAR, the new tool created in partnership with NELLCO and Yale, is certainly fun to play with, and while it does have a learning curve, once you've spent a little time with it, it can make your life easier. It comes preloaded with the last several years of survey data from the ABA, U.S. News, IPEDS, and other agencies, which gives us a jumping-off point for using the data. We've started to use it to benchmark things like collections spending, staffing levels, volumes and databases added, and records added to the catalog, but the possibilities don't stop there. If you're interested, I'd recommend checking out the link above and attending the deep dive at the AALL Annual Meeting this year for a great hands-on workshop (we saw a version of it at NELLCO this year).

We've mostly been using it for these benchmarking tools, with data preloaded from the major surveys; however, we're also going to start using it to help us complete those surveys by setting up accounts for all of our staff to complete the ALLStAR Employee Questionnaire. ALLStAR talks to LibAnswers, which we use to track our reference statistics (and statistics from a variety of other library projects, like faculty requests and institutional repository work). Once LibAnswers feeds its data into the system, completing the Employee Questionnaire should be no more onerous for staff than emailing us the responses we need to complete the survey data. We are really hoping this cuts down on the managerial time spent completing these surveys.

Finally, if enough schools begin to use ALLStAR, we could use it to define our own benchmarking and statistics that we want to keep. The information we send to various agencies (volume count, anyone?) is not necessarily useful for us. If we have the discretion to create our own tool from ALLStAR, we could begin to keep statistics that are truly meaningful - for internal tracking purposes, for reporting to stakeholders, and for benchmarking among ourselves.

ALLStAR has real possibilities to help us use statistics and data to our advantage. It will be even better if we use it as a community, especially if we decide to use it to track our own metrics as a group. I'd like to encourage everyone to take a look at it and see if we can really use this tool to make our lives easier.


Monday, May 22, 2017

Getting to Know TS Librarians: Stephan Licitra


1. Introduce yourself (name & position). 
Hi, my name is Stephan Licitra. I am the Technical Services Librarian for the State Law Library of Montana. I received my MLIS in 2015 and so this is my first professional position in libraries. Before I received my degree I worked and volunteered in public, academic and special libraries. Wherever I was, I greatly enjoyed learning about that library and what made it special. 

2. Does your job title actually describe what you do? Why/why not?
Yes and no. I am charged with acquiring, processing, cataloging, and discarding library materials, along with some reference work and dealing with the ILS and vendors. Some pretty traditional stuff. But people not familiar with libraries assume Technical Services means computers and programming, which I don't do.

3. What are you reading right now?
Currently, I am reading Will Durant's "The Renaissance."

4. You suddenly have a free day at work, what project would you work on?
I would spend the day tidying up the catalog records. As we are part of a larger consortium, there is always more that can be done when it comes to data quality. Having good, consistent data will make greater functionality possible in the future.

Wednesday, May 17, 2017

Preservation of Electronic Government Information Project (PEGI)

A recent article in Against the Grain highlights PEGI, the Preservation of Electronic Government Information Project. This project is a two-year initiative prompted by growing awareness of the "serious ongoing loss of government information that is electronic in nature." Participants include the Center for Research Libraries, the Government Publishing Office, the University of North Texas, the University of Missouri, and Stanford University.

Historically, the print production workflow for government information helped ensure that content was sent to NARA, GPO, and depository libraries for preservation. Now that most government information is disseminated digitally, production workflows vary widely, resulting in a growing volume of "fugitive" publications.

According to the PEGI project narrative, the focus of the project is "at-risk government digital information of long term historical significance." The project proposes focusing on "activities of triage, drilling down into agency workflows ... and undertaking advocacy and outreach efforts to raise awareness of the importance of preserving digital government information." The project intends to undertake a comprehensive environmental scan, provide recommendations for information creators, and create an educational awareness and advocacy program.

A final goal is to create a PEGI Collaborative Agenda to identify collaborative actions to "make more electronic government information public, preservable, and preserved in multiple environments that include distributed sites in academic libraries and other heterogeneous locations that are indexed, contextualized and usable."

Library of Congress Releases Digital Catalog Records

The Library of Congress has announced that it is making 25 million records from its online catalog available for free bulk download. This is the largest such release in the Library's history. The records can be found at loc.gov/cds/products/marcDist.php, and they are also available at data.gov.

From the Library's announcement:

“The Library of Congress is our nation’s monument to knowledge and we need to make sure the doors are open wide for everyone, not just physically but digitally too,” said Librarian of Congress Carla Hayden. “Unlocking the rich data in the Library’s online catalog is a great step forward. I’m excited to see how people will put this information to use.”

The new, free service will operate in parallel with the Library's fee-based MARC Distribution Service, which is used extensively by large commercial customers and libraries. All records use the MARC (Machine-Readable Cataloging) format, the international standard maintained by the Library of Congress, with the participation and support of libraries and librarians worldwide, for representing and communicating bibliographic and related information in machine-readable form.

The data covers a wide range of Library items including books, serials, computer files, manuscripts, maps, music and visual materials.  The free data sets cover more than 45 years, ranging from 1968, during the early years of MARC, to 2014.  Each record provides standardized information about an item, including the title, author, publication date, subject headings, genre, related names, summary and other notes.
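If you want to poke at the bulk files themselves, they are standard MARC 21 binary records, so any MARC-aware tool can read them. The sketch below uses the open source pymarc library (my choice of tool, not something the Library's announcement mentions) to read a downloaded file and print a couple of the fields described above; the filename is hypothetical, since the actual bulk files on loc.gov use the Library's own naming scheme.

```python
from pymarc import MARCReader  # pip install pymarc

# Hypothetical filename; the bulk download files use their own names.
with open("loc_records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:                    # skip records pymarc could not parse
            continue
        title = record["245"]                 # MARC 245: title statement
        subjects = record.get_fields("650")   # MARC 650: topical subject headings
        if title is not None:
            print(title.format_field())
        for subject in subjects:
            print("  subject:", subject.format_field())
```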