Planet Cataloging

March 29, 2017

OCLC Next

With big data, answers drive questions

Usually, when we search for a solution, we start with a question and then seek out answers. According to Viktor Mayer-Schönberger, one of the plenary speakers at the 2017 OCLC EMEA Regional Council Meeting in Berlin, big data flips that equation on its head.

Tying into the event’s theme, “Libraries at the Crossroads: Resolving Identities,” Viktor explained that big data is all about gaining new perspectives on the world. It is revolutionizing what we see, and how we process information. And he explained that with big data, we start with answers—what the data tells us—and then go back to fill in appropriate questions and hypotheses.

As a professor at the Oxford Internet Institute and author of Big Data: A Revolution That Will Transform How We Live, Work, and Think, Viktor also explained that every additional data point is an opportunity to boost customer services and find new synergies. He talked about how the sheer quantity of big data translates into a new capability to make sense of patterns.


As I thought about his presentation, I wondered about the impact of big data on libraries. In our own way, we librarians have been big data crunchers for decades. We’ve made great strides in collecting bibliographic data at scale. So how do we move these efforts forward?

Positioning libraries for big data success

Big data has made processing large collections of data inexpensive and fast. It enables forward-looking decision-making based on data from multiple, disparate sources.

Some recent opportunities include:

Curating research data. University researchers and government agencies manage and preserve massive digital assets—images, text and data—that require integrated management and preservation programs. These data include project proposals, grant proposals, researcher notes, researcher profiles, datasets, experiment results, article drafts and copies of published articles. The library’s role in connecting and curating these institutional assets is both needed and a big opportunity for new services. OCLC Research scientists are exploring topics related to data curation and libraries with an eye toward distinctive services that will support research missions.

Aggregating library data. We are leveraging members’ collected knowledge investment for efficiency and re-use by libraries and other organizations. One example is the Virtual International Authority File (VIAF), which virtually combines multiple name authority files into a single dataset. By linking disparate names for the same person or organization, VIAF provides a convenient means for a wider community of libraries and other agencies to repurpose bibliographic data produced by libraries that serve different language communities. VIAF became an OCLC service in 2012 and today, 25 national libraries from 30 countries are represented in the cooperative data file.

Managing collection data. As libraries move from locally owned to jointly managed print collections, good data about collections can help establish priorities and focus. When aggregated and analyzed across many libraries (through programs such as Sustainable Collections Services), collections data can suggest patterns and provide insights that inform management decisions. We anticipate that a large part of existing print collections, spread across many libraries, will move into coordinated or shared management within a few years. While quantitative data must be used carefully, information about overlap and usage can supplement the judgment of librarians.

Getting ready for the future

The “Crossroads” theme of the conference was woven through many of the presentations, discussions and conversations I heard. But big data cuts across many of the topics presented, such as issues of digitization, research information management and institutional identities.

Library services will clearly be increasingly affected by big data—but here’s a thought-provoking question: Will the data be our own, or that which comes from an increasingly connected and monitored world? Will we be able to collect data from thousands of institutions in ways that present answers for which we can formulate library-specific questions? Or will we be stuck trying to adjust our inquiries and plans based on data collected elsewhere?

We are still in the early days of aggregating all sorts of new and exciting library data. Indeed, library big data might play a crucial role in framing questions about education, authority and literacy outside the spheres of commercial interest—if we can successfully navigate these crossroads together.

by Andreas Schmidt at March 29, 2017 05:12 PM

March 28, 2017

025.431: The Dewey blog

How many zeros are needed for standard subdivisions?

When you are training new classifiers on WebDewey, do you know a quick way to help them identify how many zeros are needed for standard subdivisions in a specific place, and also help them understand why extra zeros are sometimes needed? In most cases, the short answer is to check the Hierarchy box.

First, let’s review the general instructions. At the start of Table 1 is found this instruction (in WebDewey see the Notes box at T1—0 Table 1. Standard Subdivisions):

Never use more than one zero in applying a standard subdivision unless instructed to do so. If more than one zero is needed, the number of zeros is always indicated in the schedules. If the 0 subdivisions of a number in a schedule are used for special purposes, use notation 001-009 for standard subdivisions; if the 00 subdivisions also are used for special purposes, use notation 0001-0009 for standard subdivisions.

Similar wording is found in the Introduction (section 8.6).  The statement "If more than one zero is needed, the number of zeros is always indicated in the schedules" means that classifiers need to look in the schedules for patterns to copy.

In WebDewey, the Hierarchy box is a good place to look for those patterns because it offers a summary of the subdivisions of a number. For example, here is the Hierarchy box for 616 Diseases:

[Screenshot: Hierarchy box for 616 Diseases]

The pattern to follow for standard subdivisions is clear; two zeros are needed:

616.001-616.009 Standard subdivisions

 Why two zeros?  The single-zero subdivisions are used for special purposes, as shown in these entries in the Hierarchy box:

616.02 Special topics of diseases

616.04 Special medical conditions

616.07-616.09 Pathology, psychosomatic medicine, case histories

At 616 Diseases is the class-here note: "Class here clinical medicine, evidence-based medicine, internal medicine." Following the pattern for standard subdivisions shown at 616.001-616.009, we class a work of review and exercise for clinical medicine in 616.0076 (616 + 0 + 076 from T1—076 Review and exercise).

Standard subdivisions are not always labeled "standard subdivisions" when they appear in the schedules; classifiers need to be familiar with the standard subdivisions so that they can recognize notation and captions (including variants of captions) from Table 1 when they see them in the Hierarchy box.  For example, when classifiers see the following in WebDewey: 

[Screenshot: Hierarchy box for 373 Secondary education]

They should immediately know that 373.01, 373.02, 373.06, 373.08, and 373.09 are all built with standard subdivisions, because they should recognize T1—01 Philosophy and theory, T1—02 Miscellany, T1—06 Organizations and management, etc. In this Hierarchy box, there are no entries with extra zeros. The pattern shows that a single zero should be used with standard subdivisions. We class a work about urban high schools in 373.091732 (built with 373 + 091 from T1—091 Areas, regions, places in general + 732 from T2—1732 Urban regions).

Another example is the Hierarchy box for 613 Personal health and safety:

[Screenshot: Hierarchy box for 613 Personal health and safety]

Again, there are no entries with extra zeros. The only single-zero subdivision used for special purposes is 613.04 Personal health of people by gender, sex, or age group.  Among the standard subdivisions, T1—04 Special topics is intended to be used only for special purposes.  It has the note: "Use this subdivision only when it is specifically set forth in the schedules." If T1—04 Special topics is the only single-zero subdivision used for special purposes, the rest of the standard subdivisions do not need extra zeros.  The bracketed standard subdivisions—[613.081-613.084] People by gender, sex, or age group—also show the single zero; that record has the do-not-use note: "Do not use; class in 613.04."

The four single-zero subdivisions shown in the Hierarchy box that are marked with orange puzzle pieces are all built with standard subdivisions. The orange puzzle pieces mark these as built numbers that in the print DDC would appear only in the Relative Index; they have index entries serving as captions instead of the standard subdivision captions that would be used if they were intended to appear in the print schedule:

613.019 Personal health—psychological aspects

613.071 Health—education

613.087 Disabled people—health

613.092 Hygienists

Here are the relevant standard subdivisions:

T1—019 Psychological principles

T1—071 Education

T1—087 People with disabilities and illnesses, gifted people

T1—092 Biography

At 613 Personal health and safety is the standard-subdivisions-are-added note: "Standard subdivisions are added for personal health and safety together, for personal health alone." We class an encyclopedia of personal health in 613.03 (613 + 03 from T1—03 Dictionaries, encyclopedias, concordances).

What if no zero subdivisions are shown in the Hierarchy box?  For example, no zero subdivisions of 613.262 Vegetarian diet are shown in the Hierarchy box:

[Screenshot: Hierarchy box for 613.262 Vegetarian diet]

If no zero subdivisions are shown in the Hierarchy box, and there is no add note in the record, then the general rule applies: "Never use more than one zero in applying a standard subdivision unless instructed to do so." There is no add note in the record for 613.262. Hence we class a history of vegetarian diet in 613.26209 (613.262 + 09 from T1—09 History, geographic treatment, biography).

What if there is an add note in the record?  Then we don’t know how many zeros are needed until we read the add note.  For example, here is the Hierarchy box for 616.723 *Rheumatism:

[Screenshot: Hierarchy box for 616.723 Rheumatism]

No zero subdivisions of 616.723 are shown in the Hierarchy box; however, we must consult the Notes box:

[Screenshot: Notes box for 616.723 Rheumatism]

The add note is a footnote marked with an asterisk (*):

*Add as instructed under 616.1-616.9

The add note leads to 616.1-616.9 Specific diseases, which has this Hierarchy box:

[Screenshot: Hierarchy box for 616.1-616.9 Specific diseases]

This Hierarchy box first summarizes a large add table, then the other subdivisions under 616.1-616.9.  Each add table entry is shown with 616.1-616.9 plus a colon (:) plus the add table notation and its caption, e.g., 616.1-616.9:001 Philosophy and theory.  The Hierarchy box shows that standard subdivisions require two zeros because the single-zero notation is used for special purposes.  The full add table appears in the Notes box; here is the first part of that large box:

[Screenshot: first part of the Notes box (add table) for 616.1-616.9]

We class an encyclopedia of rheumatology in 616.723003 (616.723 + 0 + 03 from T1—03 Dictionaries, encyclopedias, concordances).
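
If it helps to see the arithmetic at a glance, here is a minimal sketch in Python that reassembles the built numbers worked through in this post. The function and its name are purely illustrative (WebDewey has no such function); the T1 notation is written with its own leading zero, so extra_zeros counts only the zeros the schedule adds beyond it:

def add_standard_subdivision(base, extra_zeros, t1):
    """Append a Table 1 notation to a base number.

    base        -- schedule number, e.g. "613.262"
    extra_zeros -- 0 for the default single-zero pattern,
                   1 where the schedule shows 001-009
    t1          -- T1 notation with its leading zero, e.g. "09"
    """
    digits = base.replace(".", "") + "0" * extra_zeros + t1
    return digits[:3] + "." + digits[3:]  # decimal point after the third digit

# Built numbers from this post:
assert add_standard_subdivision("613", 0, "03") == "613.03"
assert add_standard_subdivision("613.262", 0, "09") == "613.26209"
assert add_standard_subdivision("616", 1, "076") == "616.0076"
assert add_standard_subdivision("616.723", 1, "03") == "616.723003"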

For special instructions on using the WebDewey number building tool to add standard subdivisions to three-digit numbers ending in zero, see these previous posts: part 1, part 2.

In some cases it is not easy to show patterns for standard subdivisions, and instead special instructions are given in add notes. In those cases—unless there happens to be a built number that shows the pattern—the Hierarchy box is not particularly helpful; classifiers must rely on the add notes. An example can be found at 337.3-337.9 Foreign economic policies and relations of specific jurisdictions and groups of jurisdictions.  Here is the Hierarchy box:

[Screenshot: Hierarchy box for 337.3-337.9]

Here are the relevant add notes from the Notes box:

Add to base number 337 notation T2—3-T2—9 from Table 2, e.g., economic policy of United Kingdom 337.41; then, for foreign economic relations between two jurisdictions or groups of jurisdictions, add 0* and to the result add notation T2—1-T2—9 from Table 2, e.g., economic relations between United Kingdom and France 337.41044

*Add 00 for standard subdivisions; see instructions at beginning of T1—0

The main add note shows how the single-zero subdivisions are used for a special purpose—to show foreign economic relations between two jurisdictions or groups of jurisdictions—and the footnote marked with an asterisk (*) indicates that when only one jurisdiction or group of jurisdictions is specified and then a standard subdivision is added, two zeros are needed.  A history of United States foreign economic relations is classed in 337.73009 (337 + 73 from T2—73 United States + 0 + 09 from T1—09 History, geographic treatment, biography).

At the start of Table 5—at T5—0 Table 5. Ethnic and National Groups in WebDewey—are special instructions.  Here are key portions copied from the Notes box:

Except where instructed otherwise, and unless it is redundant, add 0 to the number from this table and to the result add notation T2—1 or T2—3-T2—9 from Table 2 for area in which a group is or was located, e.g., Germans in Brazil T5—31081, but Germans in Germany T5—31; Jews in Germany or Jews from Germany T5—924043. If notation from Table 2 is not added, use 00 for standard subdivisions; see below for complete instructions on using standard subdivisions

. . . .

When Table 5 notation is not followed by 0 plus notation from Table 2, use 00 for standard subdivisions, e.g., periodicals about sociology of Japanese 305.8956005, collected biography of Irish Americans in New York City 974.71004916200922. When Table 5 notation is followed by 0 plus notation from Table 2, however, use 0 for standard subdivisions, e.g., periodicals about sociology of Japanese Americans 305.895607305. (For the purpose of this rule, notation T5—96073 African Americans is treated as Table 5 notation, e.g., periodicals on sociology of African Americans 305.896073005, periodicals on sociology of African Americans in Ohio 305.896073077105) 
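
Expressed procedurally, the quoted rule is quite compact. Here is a minimal Python sketch of it (the function name is mine, and the "unless it is redundant" caveat, e.g. Germans in Germany, is omitted); because the T1 notation carries its own leading zero, a single concatenation covers both the 0 and 00 cases:

def t5_standard_subdivision(t5, t1, t2_area=""):
    """Combine T5 notation, an optional T2 area, and a T1 notation
    per the instructions at T5--0 (redundancy checks omitted)."""
    # With a T2 area: T5 + 0 + T2, then T1 adds only its own zero
    # ("use 0").  Without one, the same concatenation supplies the
    # extra zero before T1 ("use 00").
    return t5 + "0" + t2_area + t1

# Examples from the quoted instructions (305.8 is the sociology base):
assert "305.8" + t5_standard_subdivision("956", "05") == "305.8956005"
assert "305.8" + t5_standard_subdivision("956", "05", "73") == "305.895607305"
assert "305.8" + t5_standard_subdivision("96073", "05") == "305.896073005"
assert "305.8" + t5_standard_subdivision("96073", "05", "771") == "305.896073077105"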

Classifiers need to be aware of the instructions at the start of Table 5 when adding notation from Table 5, because those instructions are not repeated in the schedules where add notes say to add notation from Table 5.   

by Juli at March 28, 2017 05:27 PM

March 27, 2017

TSLL TechScans

Getting to Know TS Librarians: (Renee Chapman Award Winner) Jean Pajerek


1. Introduce yourself.
I'm Jean Pajerek and I am the Director for Information Management at Cornell Law Library.

2. Does your job title actually describe what you do? Why/why not?

My title used to be “Head of Technical Services.” This conveys something to people who work in libraries, but many people outside of libraries think (understandably) that it means I’m the head of IT. Quite a few years ago, we realized that tech services staff were involved in activities beyond traditional tech services work; for example, I am the administrator for our institutional repository. We think “Information Management” is more inclusive while also being sufficiently vague so that people still do not know exactly what it is we do!

3. What are you reading right now?
For recreational reading, I am just finishing up Louise Penny’s “How the Light Gets In,” which I really enjoyed. Louise Penny writes mysteries set in Quebec, a place I love visiting. For work, I am reading “Semantic Web for the Working Ontologist,” by Allemang and Hendler.

4. You suddenly have a free day at work, what project would you work on?
If I suddenly had a free day at work, I would want to spend it working on my upcoming Deep Dive program for AALL in Austin, Linked Data on Your Laptop. I want to provide a really eye-opening learning experience for the program participants, and that’s going to take a lot of work!

by noreply@blogger.com (Lauren Seney) at March 27, 2017 01:11 PM

March 23, 2017

Terry's Worklog

MarcEdit and Alma Integration: Working with holdings data

Ok Alma folks,

I’ve been thinking about a way to integrate holdings editing into the Alma integration work with MarcEdit.  Alma handles holdings via MFHDs, but honestly, the process for getting to holdings data seems a little quirky to me.  Let me explain.  When working with bibliographic data, the workflow to extract records for edit and then update looks like the following:

 Search/Edit

  1. Records are queried via Z39.50 or SRU
  2. Data can be extracted directly to MarcEdit for editing

 

Create/Update

  1. Data is saved, and then turned into MARCXML
  2. If the record has an ID, I have to query a specific API to retrieve specific data that will be part of the bib object
  3. Data is assembled in MARCXML, and then updated or created.

 

Essentially, an update or create takes 2 API calls.

For holdings, it’s a much different animal.

Search/Edit:

  1. Search via Z39.50/SRU
  2. Query the Bib API to retrieve the holdings link
  3. Query the holdings link api to retrieve a list of holding ids
  4. Query each holdings record API individually to retrieve a holdings object
  5. Convert the holdings object to MARCXML and then into a form editable in the MarcEditor
    1. As part of this process, I have to embed the bib_id and holding_id into the record (I’m using a 999 field) so that I can do the update

 

For Update/Create

  1. Convert the data to MARCXML
  2. Extract the ids and reassemble the records
  3. Post via the update or create API

 

Extracting the data for edit is a real pain.  I’m not sure why so many calls are necessary to pull the data.
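
To make the sequence concrete, here is a rough Python sketch of the retrieval side. To be clear, this is not MarcEdit’s code; the endpoint shapes follow my reading of the public Ex Libris Alma REST API for bibs and holdings, so treat the details as assumptions:

import requests

API_BASE = "https://api-na.hosted.exlibrisgroup.com/almaws/v1"  # region-specific host
API_KEY = "..."  # your Alma API key

def fetch_holdings(mms_id):
    """One call per step: bib holdings list, then each holdings record."""
    headers = {"Authorization": "apikey " + API_KEY, "Accept": "application/json"}

    # Steps 2-3: get the list of holding ids attached to the bib.
    resp = requests.get("%s/bibs/%s/holdings" % (API_BASE, mms_id), headers=headers)
    resp.raise_for_status()
    holding_ids = [h["holding_id"] for h in resp.json().get("holding", [])]

    # Step 4: retrieve each holdings record individually (MARCXML inside).
    records = []
    for hid in holding_ids:
        r = requests.get("%s/bibs/%s/holdings/%s" % (API_BASE, mms_id, hid),
                         headers={"Authorization": "apikey " + API_KEY,
                                  "Accept": "application/xml"})
        r.raise_for_status()
        # Step 5: convert to an editable form, carrying mms_id and
        # holding_id along (MarcEdit embeds them in a 999 field).
        records.append((mms_id, hid, r.text))
    return records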

Anyway – let me give you an idea of the process I’m setting up.

First – you query the data:

A couple of things to note – to pull holdings, you have to click on the download all holdings link, or right-click on the item you want to download. Or select the items you want to download and then press CTRL+H.

When you select the option, the program will prompt you, asking whether you want it to create a new holdings record if one doesn’t exist.

 

The program will then either download all the associated holdings records or create a new one.

A couple of things to notice about these records: there is a 999 field added, and you’ll notice that I’ve created this in MarcEdit. Here’s the problem… I need to retain the bib number to attach the holdings record to (it’s not in the holdings object), and I need the holdings record number (again, not in the holdings object). This 999 field is required in MarcEdit’s process. I can tell whether a holdings item is new or updated by the presence or absence of the $d.

 

Anyway – this is the process that I’ve come up with, and it seems to work. I’ve got a lot of debugging code to remove because I was having some trouble with the Alma API responses and needed to see what was happening underneath. If you are an Alma user, I’d be curious whether this process looks like it will work. As I say, I have some cleanup left to do before anyone can use this, but I think that I’m getting close.

 

–tr

by reeset at March 23, 2017 11:52 AM

March 22, 2017

Terry's Worklog

Truncating a field by a # of words in MarcEdit

This question came up on the listserv, and I thought it might be useful to share since other folks might find it interesting. Here’s the question:

I’d like to limit the length of the 520 summary fields to a maximum of 100 words and adding the punctuation “…” at the end. Anyone have a good process/regex for doing this?
Example:
=520  \\$aNew York Times Bestseller Award-winning and New York Times bestselling author Laura Lippman’s Tess Monaghan—first introduced in the classic Baltimore Blues—must protect an up-and-coming Hollywood actress, but when murder strikes on a TV set, the unflappable PI discovers everyone’s got a secret. {esc}(S2{esc}(B[A] welcome addition to Tess Monaghan’s adventures and an insightful look at the desperation that drives those grasping for a shot at fame and those who will do anything to keep it.{esc}(S3{esc}(B—San Francisco Chronicle When private investigator Tess Monaghan literally runs into the crew of the fledgling TV series Mann of Steel while sculling, she expects sharp words and evil looks, not an assignment. But the company has been plagued by a series of disturbing incidents since its arrival on location in Baltimore: bad press, union threats, and small, costly on-set “accidents” that have wreaked havoc with its shooting schedule. As a result, Mann’s creator, Flip Tumulty, the son of a Hollywood legend, is worried for the safety of his young female lead, Selene Waites, and asks Tess to serve as her bodyguard. Tumulty’s concern may be well founded. Recently, a Baltimore man was discovered dead in his home, surrounded by photos of the beautiful—if difficult—aspiring star. In the past, Tess has had enough trouble guarding her own body. Keeping a spoiled movie princess under wraps may be more than she can handle since Selene is not as naive as everyone seems to think, and instead is quite devious. Once Tess gets a taste of this world of make-believe—with their vanities, their self-serving agendas, and their remarkably skewed visions of reality—she’s just about ready to throw in the towel. But she’s pulled back in when a grisly on-set murder occurs, threatening to topple the wall of secrets surrounding Mann of Steel as lives, dreams, and careers are scattered among the ruins.
So, there isn’t really a true expression that can break on a number of words, in part because how we define word boundaries varies between languages. Likewise, the MARC formatting can pose a challenge. So the best approach is to look for good enough – and in this case, good enough likely means breaking on spaces. My suggestion is to look for 100 spaces and then truncate.
In MarcEdit, this is easiest to do using the Replace function.  The expression would look like the following:
Find: (=520.{4})(\$a)(?<words>([^ ]*\s){100})(.*)
Replace: $1$2${words}…
Check the use regular expressions option.
So why does this work? Let’s break it down.
Find:
(=520.{4}) – this matches the field number, the two spaces related to the mnemonic format, and then the two indicator values.
(\$a) – this matches on the subfield a
(?<words>([^ ]*\s){100}) – this is where the magic happens.  You’ll notice two things about this.  First, I use a nested expression, and second, I name one.  Why do I do that? Well, the reason is that the group numbering gets wonky once you start nesting expressions. In those cases, it’s easier to name them. So, in this case, I’ve named the group that I want to retrieve, and then created a subgroup that matches a run of characters that aren’t a space, followed by a space. I then use the quantifier {100}, which means the subgroup must match exactly 100 times.
(.*) — match the rest of the field.
Now when we do the replace, putting the field back together is really easy.  We know we want to reprint the field number, the subfield code, and then the group that captured the 100 units.  Since we named the 100 units, we call that directly by name.  Hence,
Replace:
$1 — prints out =520  \\
$2 — $a
${words} — prints 100 words
… — the literals
And that’s it.  Pretty easy if you know what you are looking for.
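
If you want to sanity-check the expression outside MarcEdit, here is roughly the same operation in Python. Note that Python spells named groups (?P<name>...) and replacement references \g<name>; I have also made the inner group non-capturing, which sidesteps the group-numbering wonkiness mentioned above:

import re

# Tag + indicators, $a, 100 space-delimited words, then the remainder.
pattern = re.compile(r"(=520.{4})(\$a)(?P<words>(?:[^ ]*\s){100})(.*)")

def truncate_520(line):
    """Keep the first 100 words of a mnemonic-format 520 field and
    append a literal ellipsis; shorter fields pass through unchanged."""
    return pattern.sub(r"\1\2\g<words>...", line)
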
–tr

by reeset at March 22, 2017 08:34 PM

OCLC Next

Build joy into your library’s website

What libraries can learn from eCommerce

I’m passionate about web analytics. This passion ignited before I came to OCLC, as I’ve spent most of my career working on eCommerce teams for brands like American Eagle Outfitters and DSW. eCommerce teams use web analytics to optimize experiences for shoppers to ensure that they can find what they are looking for and ultimately click that purchase button.

Honestly, we often pushed past passion to complete obsession. We used to get our key metrics emailed to us every hour on the hour before one VP requested that the emails stop coming out after midnight so the team could get some sleep. Since I’ve been here at OCLC, I’ve found that a lot of what we do in eCommerce can be leveraged for improving library websites as well.

Joyful stacks

When I first joined OCLC, I reflected upon how my new calling intersected with my favorite user experience quote by Don Norman, who has done a lot of interesting things in his career, including work in the Psychology Department at the University of California, San Diego. I love this quote from him:

“It is not enough that we build products that function, that are understandable and usable, we also need to build products that bring joy and excitement, pleasure and fun, and, yes, beauty to people’s lives.”

That sure fits into our mission, doesn’t it? It reminds me of the first time I ever experienced a library. I was around five and still remember that magical promise: “Choose any book you want.” As I entered our small public library, I saw a long book sticking out of the bottom shelf. I remember pulling it out and seeing the Jumanji cover.

I loved that book, and it’s one of the most joyful memories of my childhood. Do you remember your first memory of a library?

So, libraries’ physical spaces already bring joy and beauty to people’s lives. I think our goal should be to make our online presences as amazing and as joyful as the in-person experiences. Web analytics can help.

You can’t improve what you don’t measure

Web analytics will help you spot trends and behaviors about your users so that you can make adjustments that improve their experiences. We recently did a survey of our members about library website redesign projects. The majority of respondents indicated that redesign projects were top of mind, either in-flight or just completed.

Surprisingly, 41% of those working on website redesign improvements told us that they did NOT plan to use web analytics to track those improvements. If that’s the case, how will you know if you’ve logically organized the content for your users?

Are people wandering out of your library?

Imagine a person coming into your library, getting as far as the front lobby, then turning around and leaving. Another person does that…then another. That would make you reconsider your physical layout and procedures, wouldn’t it?


On the web, without analytics, these lost souls are invisible. Analytics gives us the opportunity to intercept the poor, confused people wandering around your website, build them an experience that they find intuitive and engaging, and get them to the beautiful, joyful materials and services they need.

So where do you start?

Analyzing traffic patterns and page views is a great place to start. In fact, many of the surveyed librarians said that they already look at these metrics. However, the magic really begins to happen when you look at how successful your users are at completing key workflows using conversion funnel analysis.

To analyze key workflows:

  1. Identify a key workflow (e.g., scheduling a consultation or searching the catalog)
  2. Identify each step/page in that workflow
  3. Tag pages and establish a baseline
  4. Identify points of failure and tweak the experience
  5. Measure for improvements against the baseline

And then, of course, repeat.
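
Here is a tiny Python sketch of what that measurement step produces. The workflow steps and counts below are invented for illustration; real numbers would come from your analytics tool’s page or event reports:

def funnel_report(steps):
    """Print step-to-step and overall conversion for a workflow."""
    first = steps[0][1]
    prev = first
    for name, count in steps:
        print("%-30s %6d  step: %5.1f%%  overall: %5.1f%%"
              % (name, count, 100.0 * count / prev, 100.0 * count / first))
        prev = count

funnel_report([
    ("Visited catalog search page", 1000),  # hypothetical counts
    ("Ran a search",                 640),
    ("Opened an item record",        410),
    ("Placed a hold",                 95),  # steep drop: a point of failure
])

A step whose step-to-step rate falls far below the others (here, placing a hold) is the point of failure to tweak and then re-measure against the baseline.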

We do this all the time at OCLC. For example, we added a “did you mean?” suggestion feature to searches within WorldCat Discovery and moved the “Did you mean?” phrase from here:

[Screenshot: “Did you mean?” suggestion in its original location]

to here:

[Screenshot: “Did you mean?” suggestion in its new, more prominent location]

We DOUBLED the number of clicks on that option, because having the call-to-action up front is usually a good idea from a user experience standpoint. Sometimes it really can be that simple. Small tweaks can yield huge results for our users.

Treat your library site like another branch. Measure what works and what doesn’t. Then improve it a little bit, every time you make a change. Soon, your online presence will be just as much a place of joy and beauty as your most beloved physical space.

by Cathy King at March 22, 2017 02:00 PM

March 21, 2017

Terry's Worklog

MarcEdit Update Notes

MarcEdit Update: All Versions

Over the past several weeks, I’ve been working on a wide range of updates related to MarcEdit. Some of these updates have dealt with how MarcEdit handles interactions with other systems, some with integrating the new bibframe processing into the toolkit, and some with adding more functionality around the program’s terminal tools and SRU support. In all, this is a significant update that required the addition of ~20k lines of code to the Windows version, and almost 3x that to the MacOS version (as I was adding SRU support). I think the updates provide substantial benefit. The updates completed were as follows:

MacOS:

* Enhancement: SRU Support — added SRU support to the Z39.50 Client
* Enhancement: Z39.50/SRU import: Direct import from the MarcEditor
* Enhancement: Alma/Koha integration: SRU Support
* Enhancement: Alma Integration: All code needed to add Holdings editing has been completed; TODO: UI work.
* Enhancement: Validator: MacOS was using older code — updated to match Windows/Linux code (i.e., moved away from original custom code to the shared validator.dll library)
* Enhancement: MARCNext: Bibframe2 Profile added
* Enhancement: BibFrame2 conversion added to the terminal
* Enhancement: Unhandled Exception Handling: MacOS handles exceptions differently — I created a new unhandled exception handler to make it so that if there is an application error that causes a crash, you receive good information about what caused it.

Couple of specific notes about changes in the Mac Update.

Validation – the Mac program was using an older set of code that handled validation. The code wasn’t incorrect, but it was out of date. At some point, I’d consolidated the validation code into its own namespace and hadn’t updated these changes on the Mac side. This was unfortunate. Anyway, I spent time updating the process so that all versions now share the same code and will receive updates at the same pace.

SRU Support – I’m not sure how I missed adding SRU support to the Mac version, but I had. So, while I was updating ILS integrations to support SRU when available, I added SRU support to the MacOS version as well.

BibFrame2 Support – One of the things I was never able to get working in MarcEdit’s Mac version was the Bibframe XQuery code. There were some issues with how URI paths resolved in the .NET version of Saxon. Fortunately, the new bibframe2 tools don’t have this issue, so I’ve been able to add them to the application. You will find the new option under the MARCNext area or via the command-line.

Windows/Linux:

* Enhancement: Alma/Koha integration: SRU Support
* Enhancement: MARCNext: Bibframe2 Profile added
* Enhancement: Terminal: Bibframe2 conversion added to the terminal.
* Enhancement: Alma Integration: All code needed to add Holdings editing has been completed; TODO: UI work.

Windows changes were specifically related to integrations and bibframe2 support. On the integrations side, I enabled SRU support when available and wrote a good deal of code to support holdings record manipulation in Alma. I’ll be exposing this functionality through the UI shortly. On the bibframe front, I added the ability to convert data using either the bibframe2 or bibframe1 profiles. Bibframe2 is obviously the default.

With both updates, I made significant changes to the Terminal and wrote up some new documentation. You can find the documentation and information on how to leverage the terminal versions of MarcEdit at this location: The MarcEdit Field Guide: Working with MarcEdit’s command-line tools

Downloads can be picked up through the automated updating tool or from the downloads page at: http://marcedit.reeset.net/downloads

by reeset at March 21, 2017 03:28 PM

March 16, 2017

TSLL TechScans

Core Competencies for Cataloging and Metadata Librarians

The CaMMS Competencies and Education for a Career in Cataloging Interest Group presented Core Competencies for Cataloging and Metadata Professional Librarians at ALA Midwinter in Atlanta. The document supplements the American Library Association's Core Competencies in Librarianship. The document outlines Knowledge, Skill & Ability, and Behavioral Competencies and is meant to define a "baseline of core competencies for LIS professionals in the cataloging and metadata field."

Knowledge competencies are those providing understanding of the conceptual models upon which cataloging standards are based. Skill & ability competencies include not just the application of particular skills and frameworks, but also the ability to "synthesize these principles and skills to create cohesive, compliant bibliographic data that function within local and international metadata ecosystems." Behavioral competencies are those "personal attributes that contribute to success in the profession and ways of thinking that can be developed through coursework and employment experience."

Of particular note is the emphasis on cultural awareness in the introductory section: "Metadata creators must possess awareness of their own historical, cultural, racial, gendered, and religious worldviews ... Understanding inherent bias in metadata standards is considered a core competency for all metadata work."

Full text of the competencies document is available via ALA's institutional repository. Slides from the presentation at ALA Midwinter are also available.

by noreply@blogger.com (Jackie Magagnosc) at March 16, 2017 08:26 PM