Monday, September 28, 2015

BitCurator

From 2011 to 2014, a team at the School of Information and Library Science at UNC-Chapel Hill and the Maryland Institute for Technology in the Humanities worked to make digital forensics tools accessible to librarians, archivists, and their peers. What they developed was an environment that collects free and open source tools into a suite of resources for anyone working through a digital curation workflow. Known as BitCurator, the environment can be installed locally as a Linux system or run as a virtual machine. As more and more of what we preserve is born digital, the tools in the BitCurator suite will become increasingly valuable for ensuring the authenticity and preservation of materials, as well as for addressing the privacy concerns of the people who created them.
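To make that a little more concrete, here is a minimal Python sketch of the kind of fixity check the forensics tools bundled in BitCurator automate. It is not BitCurator's own code, and the directory name is invented for illustration:

import hashlib
from pathlib import Path

def fixity_manifest(directory, algorithm="sha256"):
    """Record a checksum for every file under a directory.

    Comparing manifests made before and after a transfer is one way
    to demonstrate that digital materials have not been altered.
    """
    manifest = {}
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            digest = hashlib.new(algorithm)
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            manifest[str(path)] = digest.hexdigest()
    return manifest

if __name__ == "__main__":
    # "acquired_media" is a hypothetical accession directory.
    for name, checksum in fixity_manifest("acquired_media").items():
        print(checksum, name)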

The creation of BitCurator was just the beginning: an additional grant has extended the project, now known as BitCurator Access, for another two years. This phase seeks to build on the foundation laid in the initial project and to develop tools that streamline the workflow for information professionals.


To support the future of BitCurator, the developers have set up the BitCurator Consortium. The consortium's mission is to support digital forensics practices in libraries, archives, and museums, and to help preserve and provide authentic access to our digital records by sharing resources and improving the BitCurator environment.

Friday, September 25, 2015

Thinking Inside the (Pizza) Box with Semantic Web Concepts

If you work in technical services, you probably have at least heard the phrases "Linked Data" and "Semantic Web" among the things that we librarians are supposed to be concerned about when we think about the future of bibliographic data. If you're like me, you may find that it is hard to get a sense of what a practical application of these concepts would look like.

In a recent blog post on VoxPopuLII, Amy Taylor of the American University Washington College of Law discussed her early efforts to develop a legal research ontology. The post offers several things that can help in building a more practical understanding of Semantic Web concepts.

First, Amy mentions the book Semantic Web for Dummies as a useful starting place for her own learning. She also describes the software Protégé as a tool for developing ontologies, and specifically points to a tutorial called Pizzas in 10 Minutes, in which you use Protégé to build an ontology for pizzas. This looks like just the kind of hands-on practice I've been looking for.
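For those who would rather experiment in code than in the Protégé interface, here is a rough Python sketch, using the rdflib library, of the sort of class hierarchy the pizza tutorial has you build. The namespace and class names are my own illustrations, not the tutorial's actual ontology:

from rdflib import Graph, Namespace, OWL, RDF, RDFS

# A toy namespace standing in for a pizza ontology.
PIZZA = Namespace("http://example.org/pizza#")

g = Graph()
g.bind("pizza", PIZZA)

# Declare a few OWL classes.
for cls in (PIZZA.Pizza, PIZZA.PizzaTopping, PIZZA.MargheritaPizza,
            PIZZA.CheeseTopping, PIZZA.TomatoTopping):
    g.add((cls, RDF.type, OWL.Class))

# Arrange them in a hierarchy.
g.add((PIZZA.MargheritaPizza, RDFS.subClassOf, PIZZA.Pizza))
g.add((PIZZA.CheeseTopping, RDFS.subClassOf, PIZZA.PizzaTopping))
g.add((PIZZA.TomatoTopping, RDFS.subClassOf, PIZZA.PizzaTopping))

# An object property linking pizzas to their toppings.
g.add((PIZZA.hasTopping, RDF.type, OWL.ObjectProperty))
g.add((PIZZA.hasTopping, RDFS.domain, PIZZA.Pizza))
g.add((PIZZA.hasTopping, RDFS.range, PIZZA.PizzaTopping))

print(g.serialize(format="turtle"))

Even this tiny graph hints at the appeal: once classes and relationships are declared, software can reason over them rather than just display them.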

Finally, Amy's rough sketch of her own legal research ontology is a helpful map of how Semantic Web ontologies might serve the world of law librarianship. The post is packed with practical information about applying Semantic Web concepts to the library world.

Wednesday, September 23, 2015

Examining FRBR

Many of us (actually, hopefully all of us) have been paying attention over the past decade or two as FRBR (Functional Requirements for Bibliographic Records) has become increasingly prominent in cataloging theory. In fact, the FRBR conceptual model underlies much of RDA and is the reason for many discussions of Jane Austen’s various works, expressions, manifestations, and items during RDA training. However, I know I am not the only one disappointed that RDA isn’t fully realized in our current online catalog environment, where catalogs have yet to be “FRBR-ized” and therefore remain unable to demonstrate some of the touted benefits of RDA.
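For anyone who, like me, finds the model easier to grasp in code than in prose, here is a minimal Python sketch of how the FRBR Group 1 entities nest. The attribute names and the Jane Austen example data are purely illustrative and are not drawn from RDA or any FRBR vocabulary:

from dataclasses import dataclass, field
from typing import List

# A Work is realized through Expressions, which are embodied in
# Manifestations, which are exemplified by Items.

@dataclass
class Item:
    barcode: str  # a single copy on a shelf (or server)

@dataclass
class Manifestation:
    publisher: str
    year: int
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:
    language: str
    form: str  # e.g., text, spoken word
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    title: str
    creator: str
    expressions: List[Expression] = field(default_factory=list)

# One work, many downstream entities (all data invented for illustration).
emma = Work(
    title="Emma",
    creator="Austen, Jane",
    expressions=[
        Expression(
            language="eng",
            form="text",
            manifestations=[
                Manifestation(
                    publisher="Example Classics",
                    year=2015,
                    items=[Item(barcode="39001012345678")],
                )
            ],
        )
    ],
)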

Karen Coyle’s forthcoming book “FRBR, Before and After: A Look at Our Bibliographic Models” promises to examine exactly this topic: the promises and pitfalls of the current leading conceptual model. Personally, I am excited to see a detailed discussion of where we’ve been, where we are now, and where we seem to be headed with regard to bibliographic models. Also of note is the book's discussion of technology and its effect on library data and data modeling.

Also of interest: while the book will be released in print in November 2015, it will be released as open access in early 2016. It will be interesting to watch this publishing model and see whether it proves viable, not just for the publisher, but for the book’s audience. For more details, check out Coyle’s announcement and the book’s afterword.

Wednesday, September 16, 2015

Law Libraries, Looseleafs, and Print - Oh My!

Few law librarians these days are sheltered from the print vs. electronic battles being waged across our lands. A common site for skirmishes is the "Land of Looseleafs" - do we get an adequate return on the investment we make in these materials? Take a look at what our neighbors north of the border at Slaw have to say about the pains and gains of loose-leaf publications in a world that is becoming increasingly digital.
For a more in-depth look at what North American law libraries are currently spending and plan to spend on print materials, including loose-leafs, you can order the Primary Research Group's recent publication "Law Library Plans for the Print Materials Collection".  

Incidentally, the sample sets of statistics provided in their press release caused one DePaul law librarian, Mark Giangrande, to make an interesting observation: "We in the academic business try to prepare students for the tools that they can expect to use in practice. If law firms are buying less print... why are academic libraries still buying at a much higher percentage?"  Why indeed, Mark? Why indeed?


Wednesday, September 9, 2015

MarcEdit 6.1 (Windows/Linux) & MarcEdit Mac (1.1.25)

Many AALL TS-SIS members use MarcEdit in their daily work. Terry Reese, the developer of MarcEdit, has spent most of the summer of 2015 working on an OS X version, but he has also added some major features to the Windows version.

The OS X version has recently moved into "release," and Terry has written about some of its major features and how much functional parity there is between the OS X and Windows/Linux versions: http://blog.reeset.net/archives/1791

The Windows release includes two major features: a "Build New Field" tool and an access point (heading) validation tool. Be sure to check out the new headings validation functionality. Terry posted about how he expects the Validate Headings feature to work at: http://blog.reeset.net/archives/1775
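As a very rough analogy to what Build New Field does (this is not MarcEdit's implementation, and the 020-to-856 pattern and URL are invented for illustration), here is how one might assemble a new field from data already in a record using Python and the pymarc library, assuming pymarc 5.x:

from pymarc import Field, Record, Subfield

record = Record()
record.add_field(
    Field(tag="020", indicators=[" ", " "],
          subfields=[Subfield(code="a", value="9780000000000")])  # placeholder ISBN
)

# Build a new 856 from the ISBN, much as Build New Field assembles
# a field from a user-supplied pattern.
fields_020 = record.get_fields("020")
isbn = fields_020[0]["a"] if fields_020 else None
if isbn:
    record.add_field(
        Field(
            tag="856",
            indicators=["4", "0"],
            subfields=[
                Subfield(code="u", value=f"https://example.org/lookup?isbn={isbn}"),
                Subfield(code="z", value="Connect to online resource"),
            ],
        )
    )

print(record)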

The changelog for the Mac release (1.1.25) follows:

****************************
** 1.1.25 ChangeLog
****************************

Bug Fix: MarcEditor — changes may not be retained after save if you make manual edits following a global update.
Enhancement: Delimited Text Translator completed.
Enhancement: Export Tab Delimited completed.
Enhancement: Validate Headings Tool completed.
Enhancement: Build New Field Tool completed.
Enhancement: Build New Field Tool added to the Task Manager.
Update: Linked Data Tool — Added Embed OCLC Work option.
Update: Linked Data Tool — Enhanced pattern matching.
Update: RDA Helper — Updated for parity with the Windows version of MarcEdit.
Update: MarcValidator — Enhancements to support better checking when looking at the mnemonic format.
If you are on the Windows/Linux version, you’ll see the following changes:

*************************************************
* 6.1.60 ChangeLog
*************************************************

Update: Validate Headings — Updated patterns to improve the process for handling heading validation.
Enhancement: Build New Field — Added a new global editing tool that provides a pattern-based approach to building new field data.
Update: Added the Build New Field function to the Task Management tool.
UI Updates: Specific to support Windows 10.

Wednesday, September 2, 2015

Re-Envisioning the MLS

On August 1, 2015, the University of Maryland iSchool released Re-Envisioning the MLS: Findings, Issues, and Considerations, an attempt to predict the future of the MLS. Reading through the document, it is hard to see where those of us working with traditional metadata (MARC catalogers) fit into this vision of the future. The report is a product of the iSchool's Re-Envisioning the MLS initiative, launched in August 2014, and is intended to answer questions such as "What is the value of an MLS degree?", "What should the future MLS degree look like?", and "What are the competencies, attitudes, and abilities that future library and information professionals need?"

Key findings from the executive summary include:
  • The shift in focus to people and communities
  • Core values remain essential
  • Competencies for future information professionals
  • The MLS may not be relevant/necessary in all cases
  • Access for all
  • Social innovation and change
  • Working with data and engaging in assessment
  • Knowing and leveraging the community
  • Learning/learning sciences, education and youth
  • Digital assets and archival thinking

The "core competencies" for future information professionals include, "the ability to lead and manage projects and people; to facilitate learning and education ...  Additionally, information professionals need marketing and advocacy skills; strong public speaking and written communication skills; a strong desire to work with the public; problem-solving and the ability to think and adapt instantaneously; knowledge of the principles and applications of fundraising, budgeting, and policymaking; and relationship building among staff, patrons, community partners, and fundraisers."

Perhaps our work is described at a deeper level of the report. Reading through the detail under "Core values remain essential," one finds, among others, the concept of "Preservation and Heritage," described as "providing current and future access to records, both analog and digital." Another piece of our work seems to fall under "Working with data and engaging in assessment," with its stated need for professionals who can "manage data assets and understand digital curation techniques." Under "Digital assets and archival thinking," the report notes the importance of information professionals who can help communities manage, curate, and preserve their digital assets.

Finally, in a table intended to summarize key topical areas of a future MLS curriculum, one of nine suggested content areas is "Digital Asset Management," described as the "ability to create, store, and access digital assets." The skills listed in this area are metadata, information organization, data storage, and access/retrieval systems. It is interesting to note that the skills we think of as "cataloging" are seen as applying only to digital resources. Although this document is focused on the future, one feels a need to say "I'm not dead yet!" on behalf of more traditional metadata and resources.