Planet Cataloging

November 25, 2014

First Thus

ACAT Copyrighted contents note

Posting to Autocat

On 24/11/2014 22.20, Bilodeau, Robert wrote:

Has anyone ever heard about a contents note being copyrighted? That is, you couldn’t add a contents note in your bib record because in some way it would infringe some copyright? If that situation exists, then the bib record containing this copyrighted contents note shouldn’t be available through Z39.50, because this protocol aims at information retrieval, that is, the sharing of records.

I have read much more often of cases where the summary itself is a violation of copyright. From this page at Stanford, http://fairuse.stanford.edu/overview/fair-use/cases/, we read that someone paraphrased parts of unpublished materials from J.D. Salinger, and it was determined that the paraphrases were illegal. More recently, the German justice system found that Google was in violation of copyright with the Google News service, which displays “snippets” (or summaries) of the actual news sites, and ordered Google to pay. It was interesting how it turned out, however. Google decided not to pay, but rather to deindex German news sites from Google News. The hit rate plummeted for the German news sites, and they relented. http://techcrunch.com/2014/10/23/kapitulation/

Concerning the 520 summary note, perhaps a case could be made that it is “copyrighted,” although it is, by definition, a summary of some larger resource. So based on the above idea, the first question is: is the summary itself fair use, or does it break copyright? I think summary notes in a library catalog would be the very epitome of fair use–but when lawyers get involved, you never know.

As for the question of whether a summary note itself can be copyrighted: most of the ones I have seen are actually copied verbatim from the blurbs written by the publishers. (http://en.wikipedia.org/wiki/Blurb) A blurb is a type of advertising that publishers actually want the public to read, e.g., the publishing blurb found in the record http://lccn.loc.gov/96002959. So, I would be surprised if somebody found fault with copying those. If the summary note is written originally by the cataloger, it would seem to me that it would have to be labelled as such somehow to distinguish it from all the others. (E.g., does the absence of the words “Provided by publisher” or something similar mean that it was originally written by a cataloger? I don’t think so.)

Copyright is crazy, and gets crazier by the day. Everybody wants a piece of the action. I personally like Google’s way of doing it: add the snippets until somebody complains, and then just delete them.


by James Weinheimer at November 25, 2014 01:01 PM

November 24, 2014

Coyle's InFormation

Multi-Entity Models.... Baker, Coyle, Petiya

Multi-Entity Models of Resource Description in the Semantic Web: A comparison of FRBR, RDA, and BIBFRAME
by Tom Baker, Karen Coyle, Sean Petiya
Published in: Library Hi Tech, v. 32, n. 4, 2014, pp. 562-582. DOI: 10.1108/LHT-08-2014-0081
Open Access Preprint

The above article was just published in Library Hi Tech. However, because the article is a bit dense, as journal articles tend to be, here is a short description of the topic covered, plus a chance to reply to the article.

We now have a number of multi-level views of bibliographic data. There is the traditional "unit card" view, reflected in MARC, that treats all bibliographic data as a single unit. There is the FRBR four-level model that describes a single "real" item, and three levels of abstraction: manifestation, expression, and work. This is also the view taken by RDA, although employing a different set of properties to define instances of the FRBR classes. Then there is the BIBFRAME model, which has two bibliographic levels, work and instance, with the physical item as an annotation on the instance.

In support of these views we have three RDF-based vocabularies:

FRBRer (using OWL)
RDA (using RDFS)
BIBFRAME (using RDFS)

The vocabularies use varying degrees of specification. FRBRer is the most detailed and strict, using OWL to define cardinality, domains and ranges, and disjointness between classes and between properties. There are, however, no sub-classes or sub-properties. BIBFRAME properties are all defined in terms of domains (classes), and there are some sub-class and sub-property relationships. RDA has a single set of classes that are derived from the FRBR entities, and each property has the domain of a single class. RDA also has a parallel vocabulary that defines no class relationships; thus, no properties in that vocabulary result in a class entailment. [1]
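
To make the differences concrete, here is a small sketch in Turtle of the three styles of definition. The class and property names (ex:Work, ex:isRealizedThrough, ex:title, and so on) are invented stand-ins for illustration, not the terms actually published in FRBRer, RDA, or BIBFRAME.

@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/vocab/> .

# FRBRer-like (OWL): disjoint classes, a property with domain and range,
# and a cardinality restriction on the class
ex:Work       a owl:Class .
ex:Expression a owl:Class ;
    owl:disjointWith ex:Work .
ex:isRealizedThrough a owl:ObjectProperty ;
    rdfs:domain ex:Work ;
    rdfs:range  ex:Expression .
ex:Work rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty ex:isRealizedThrough ;
    owl:minCardinality "1"^^xsd:nonNegativeInteger ] .

# BIBFRAME-like (RDFS): a domain plus a sub-property relationship
ex:title    a rdf:Property ; rdfs:domain ex:Instance .
ex:subtitle a rdf:Property ; rdfs:subPropertyOf ex:title .

# RDA-like "unconstrained" property: no domain, so using it entails no class
ex:titleUnconstrained a rdf:Property .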

As I talked about in the previous blog post on classes, the meaning of classes in RDF is often misunderstood, and that is just the beginning of the confusion that surrounds these new technologies. Recently, Bernard Vatant, who is a creator of the Linked Open Vocabularies site that does a statistical analysis of the existing linked open data vocabularies and how they relate to each other, said this on the LOV Google+ group:
"...it seems that many vocabularies in LOV are either built or used (or both) as constraint and validation vocabularies in closed worlds. Which means often in radical contradiction with their declared semantics."
What Vatant is saying here is that many vocabularies that he observes use RDF in the "wrong way." One of the common "wrong ways" is to interpret the axioms that you can define in RDFS or OWL the same way you would interpret them in, say, XSD, or in a relational database design. In fact, the action of the OWL rules (originally called "constraints," which seems to have contributed to the confusion, now called "axioms") can be entirely counter-intuitive to anyone whose view of data is not formed by something called "description logic (DL)."

A simple demonstration of this, which we use in the article, is the OWL axiom for "maximum cardinality." In a non-DL programming world, you often state that a certain element in your data is limited in the number of times it can be used, such as saying that in a MARC record you can have only one 100 (main author) field. The maximum cardinality of that field is therefore "1". In your non-DL environment, a data creation application will not let you create more than one 100 field; if an application receiving data encounters a record with more than one 100 field, it will signal an error.

The semantic web, in its DL mode, draws an entirely different conclusion. The semantic web has two key principles: open world, and non-unique name. Open world means that whatever the state of the data on the web today, it may be incomplete; there can be unknowns. Therefore, you may say that you MUST have a title for every book, but if a look at your data reveals a book without a title, then your book still has a title, it is just an unknown title. That's pretty startling, but what about that 100 field? You've said that there can only be one, so what happens if there are 2 or 3 or more of them for a book? That's no problem, says OWL: the rule is that there is only one, but the non-unique name rule says that for any "thing" there can be more than one name for it. So when an OWL program [2] encounters multiple author 100 fields, it concludes that these are all different names for the same one thing, as defined by the combination of the non-unique name assumption and the maximum cardinality rule: "There can only be one, so these three must really be different names for that one." It's a bit like Alice in Wonderland, but there's science behind it.
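
Here is a sketch of that scenario in Turtle, using an invented vocabulary rather than the actual FRBRer or BIBFRAME terms:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/vocab/> .

# Vocabulary: a Book has at most one main creator
ex:Book rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty ex:mainCreator ;
    owl:maxCardinality "1"^^xsd:nonNegativeInteger ] .

# Instance data: one book with what look like three main creators
ex:book1 a ex:Book ;
    ex:mainCreator ex:personA , ex:personB , ex:personC .

# A closed-world validator would reject this record as an error.
# An OWL (DL) reasoner instead combines the open-world and non-unique-name
# assumptions and infers:
#   ex:personA owl:sameAs ex:personB , ex:personC .
# (Unless the three are declared owl:differentFrom one another, in which
# case the data becomes logically inconsistent.)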

What you have in your database today is a closed world, where you define what is right and wrong; where you can enforce the rule that required elements absolutely HAVE TO be there; where the forbidden is not allowed to happen. The semantic web standards are designed for the open world of the web where no one has that kind of control. Think of it this way: what if you put a document onto the open web for anyone to read, but wanted to prevent anyone from linking to it? You can't. The links that others create are beyond your control. The semantic web was developed around the idea of a web (aka a giant graph) of data. You can put your data up there or not, but once it's there it is subject to the open functionality of the web. And the standards of RDFS and OWL, which are the current standards that one uses to define semantic web data, are designed specifically for that rather chaotic information ecosystem, where, as the third main principle of the semantic web states, "anyone can say anything about anything."

I have a lot of thoughts about this conflict between the open world of the semantic web and the need for closed world controls over data; in particular, whether it really makes sense to use the same technology for both, since there is such a strong incompatibility in the underlying logic of these two premises. As Vatant implies, many people creating RDF data are doing so with their minds firmly set in closed world rules, such that the actual result of applying the axioms of OWL and RDF to this data on the open web will not yield the expected closed world results.

This is what Baker, Petiya and I address in our paper, as we create examples from FRBRer, RDA in RDF, and BIBFRAME. Some of the results there will probably surprise you. If you doubt our conclusions, visit the site http://lod-lam.slis.kent.edu/wemi-rdf/ that gives more information about the tests, the data and the test results.

[1] "Entailment" means that the property does not carry with it any "classness" that would thus indicate that the resource is an instance of that class.

[2] Programs that interpret the OWL axioms are called "reasoners." There are a number of different reasoner programs available that you can call from your software, such as Pellet, HermiT, and others built into software packages like TopBraid.

by Karen Coyle (noreply@blogger.com) at November 24, 2014 11:23 AM

November 23, 2014

Resource Description & Access (RDA)

RDA Toolkit Release (April 22, 2014)

TOPIC 1: Changes in RDA Content
TOPIC 2: Change in Content in LC-PCC PSs
TOPIC 3: Functional Changes in the RDA Toolkit


TOPIC 1: Changes in RDA Content
There are two types of changes in the RDA content for this update: 1) the third annual major update to RDA based on the decisions made by the Joint Steering Committee for Development of RDA (JSC) at their November 2013 meeting; and 2)  “Fast Track” changes that are relatively minor and typical of a release update.
Revisions from JSC actions:
The attached document (Summary of 2014 rda updates.docx) identifies highlights from the changes to RDA due to the JSC update (see link below).  Many of the changes in this update package are due to re-numbering of instructions and references (without a change in actual content) and are not included in the attached listing.  The changes will appear with the “revision history” icon in the RDA Toolkit.  A complete listing of all changes due to the proposal process will appear in the left-side table of contents pane on the RDA tab in the toolkit, at the bottom under “RDA Update History”—you will see an additional entry there for the “2014 April Update.”  To help you focus on the more important changes to the instructions, some parts of the attached summary have been highlighted in yellow to draw your attention.

Relationship Designators for Contributors
I.3.1
The relationship designator for “editor of compilation” has been deleted, and the concept incorporated into a revised relationship designator for “editor.”

Fast Track changes
An attached PDF file (see link below) identifies the "Fast Track" changes to RDA that will be included in this release (6JSC-Sec-12-rev.pdf); Fast Track changes are not added to the RDA Update History.  Among the changes most likely to be of interest to LC staff:
7.26.1.3: The instruction has been changed from “transcribe the statement of projection” to “record the projection of cartographic content” because other cartographic content attributes are recorded.
There are several new and revised relationship designators for Appendix J including these:
opera adaptation of (work)   Reciprocal relationship: adapted as opera (work)
container of (work)  [replaces contains (work)]
music (work)  Reciprocal relationship: music for (work)
continuation in part of (work)   [replaces continues in part (work)]
replacement in part of (work)   [replaces supersedes in part (work)]
replacement of (work)     Reciprocal relationship: replaced by (work)  [replaces supersedes (work) and superseded by (work)]
merged to form (work)   [replaces merged with … to form … (work)]
There are several new and revised relationship designators for Appendix K including these:
member   [replaces group member]
family
corporate body  [replaces group member of]
component of a merger
corporate member
membership corporate body
predecessor of split
There are several new and revised glossary terms including these:
Exhibit
Illustration
Image File
Unnumbered Leaf
Unnumbered Page
TOPIC 2: Change in Content in LC-PCC PSs
A summary of LC-PCC PS updates incorporated in this release is attached (LCPCCPS_changes_2014_April.doc) (see link below).  Many of the changes to the LC-PCC PSs are related to RDA changes (re-numbering, new references, etc.).  Several PSs are being deleted because the content has been incorporated into RDA itself or the RDA update makes the PS obsolete (e.g., to remove reference to the PCC interim guidelines on treaties).  Significant changes to PSs you should be aware of:
9.19.1.2.6:  New statement to record LC practice/PCC practice for a new Optional addition.  For new authority records, catalogers may apply the option to supply “Other Designation Associated with a Person” in the authorized access point. For existing authority records, unless otherwise changing an existing heading (e.g., conflict, incorrect dates), do not change an existing AACR2 or RDA heading merely to add an “other designation”.
11.13.1.2:  Re-captioned to “Type of Corporate Body” due to changes in RDA; guidelines applying to access points formerly found in the Policy Statement at 11.7.1.4 have been moved here.  New alternative guidelines on using the spelled-out forms of a preferred name that is an initialism or acronym have been provided.
16.2.2.13 and 16.4:  Revised the U.S. Townships section in each of these PSs.
TOPIC 3: Functional Changes in the RDA Toolkit
An excerpt from ALA Publishing on the updates to the functionality of the RDA Toolkit with this release is found at the end of this email.
The next planned release of the RDA Toolkit will be in August 2014, although that update will most likely be limited to functional changes to the Toolkit and synchronization of translations.  The October 2014 release will include content updates for RDA and the LC-PCC PSs.
The documents attached to this email may also be found on the Web:
LC Summary of 2014 RDA Updates: http://www.loc.gov/aba/rda/added_docs.html  
Fast Track entries included in the April 2014 update of the RDA Toolkit: http://www.rda-jsc.org/docs/6JSC-Sec-12-rev.pdf
Changes in LC-PCC Policy Statements in the April 2014 release of the RDA Toolkit: http://www.loc.gov/aba/rda/lcps_access.html


[Source: Library of Congress]


by Salman Haider (noreply@blogger.com) at November 23, 2014 05:26 AM

RDA Toolkit Release (October 14, 2014): Changes in and Revision of Resource Description & Access and LC-PCC PS

A new release of the RDA Toolkit was published on Tuesday, October 14.  This message will cover several points you should be aware of related to the release. 

TOPIC 1: Changes in RDA Content
TOPIC 2: Change in Content in LC-PCC PSs
TOPIC 3: Additional Content in the RDA Toolkit

TOPIC 1: Changes in RDA Content

This update contains only “Fast Track” changes that are relatively minor (these are not flagged in the RDA text with revision history icons).  The linked file 6JSC-Sec-13.pdf contains a complete listing of the Fast Track changes. You’ll note that many of the changes are to examples, including moving some examples to more appropriate instructions, replacing some examples, and adding initial articles to some preferred and variant titles, etc.—note that the addition of the initial articles is intended to exhibit the base instruction at RDA 6.2.1.7, and that LC/PCC practice is to OMIT initial articles (per 6.2.1.7, Alternative, etc.), so do not interpret the revised examples as a policy change.

There are also some new and revised relationship designators for use in Appendices I, J, and K including these:

book artist
letterer
graphic novelization of (work)   Reciprocal relationship: adapted as graphic novel (work)
adapted as libretto (work)  [replaces basis for libretto (work)]
adapted as novel (work)  [replaces novelization (work)]
adapted in verse as (work)  [replaces verse adaptation (work)]
digested as (work)  [replaces digest (work)]
modified by variation as (work)  [replaces musical variations (work)]

TOPIC 2: Change in Content in LC-PCC PSs

A summary of LC-PCC PS updates incorporated in this release is linked (LCPCCPS_changes_2014_October.doc).  The changes are fairly minor, except for some revisions/new statements requested by the music cataloging community (e.g., 6.15.1.7, 6.18.1.4, 6.28.1.9.1, Alternative).  Some information previously held only in the Descriptive Cataloging Manual section Z1 has moved to policy statements (e.g., 9.16.1.3, 9.19.1.5 for profession and/or occupation). Another minor change is related to, well, “minor changes”!  The PS for 11.2.2.5 introduces a new category for minor changes to corporate body names--the addition, omission, or fluctuation of a frequency word (e.g., annual, biennial) in a conference name.

TOPIC 3: Additional Content in the RDA Toolkit

This release will include the addition of British Library Policy Statements (BL PS). The BL PS icons will be set to display in the RDA text by default, but the links can be turned off in the Toolkit Settings portion of the My Profile page (if you have created your own profile).

The documents attached to this email may also be found on the Web:
[Source: Dave Reser, LC PSD]

[Note: The above message was addressed to Library of Congress catalogers, but it is a good source for other libraries and cataloging librarians as well]

by Salman Haider (noreply@blogger.com) at November 23, 2014 05:26 AM

November 21, 2014

First Thus

ACAT Looking for a very short definition of authority control to give to a non-librarian

Posting to Autocat

On 11/21/2014 4:46 PM, Marc Truitt wrote:

On 2014-11-21 1106, Kay,Tina L (CONTR) – NHT-1 wrote:
I am struggling to figure out how to explain authority control in one (long?) sentence to someone who does not work in a library.  Unfortunately, my mind isn’t wrapping around this very complex activity in a way to describe it succinctly.
Any help would be appreciated!

How’s this?:
“Library authority control enables variants of names and titles to be linked to an *authorized* form, so that, for example, a reader can, without any advance knowledge, find the works of Samuel Langhorne Clemens, Quintus Curtius Snodgrass, and Thomas Jefferson Snodgrass all entered under the name of Mark Twain.”

This isn’t a good example because in the case of Mark Twain, you do have to search under all of those different names because of the post-AACR1 rules of “bibliographic identities” and the fact that Twain is considered to be a 20th-century author.

That aside, when there were card catalogs, it was much easier to explain it than today. And it made kind of an impact.

All you had to do was show them the card catalog with dozens or hundreds or thousands of drawers, each containing hundreds of cards, and let the huge numbers sink into their heads for a few seconds. Then show them one, single card for something by, e.g. Peter Tchaikovsky and ask them, “Where should I put this one card among all of these other cards so that people can find it? Under T? Or C? or Ch? Materials by him come out with all of these different forms of his name and many more besides. Should I just file this card under the form found on my item? And then do the same with each form on each other item? That would make me happy because it would make my work a lot easier; but that would also make it a lot harder for you because you would have to find out all those different forms. You’d have to do research to find the different forms of his name, and then you would have to run all around this catalog–just to find what we have by Peter Tchaikovsky! Either that or you could just start browsing the cards, one-by-one. Authority control puts the majority of that work on me so that you have that much less work to do.”

Of course, this assumes that the cross-references work in the catalog, and they don’t. (As I have pointed out several times, just because they appear in an alphabetical listing when someone does a left-anchored text browse definitely does not mean they work.)

Today with our “virtual catalogs,” authority control is just as important but far more abstract. I think relatively few non-catalog librarians really understand it today. Unfortunately, I think it is one of those concepts that is much easier to demonstrate than to explain.


by James Weinheimer at November 21, 2014 04:28 PM

Mod Librarian

Collie on Beach - Getty Images Embed

Collie on Beach - Getty Images Embed Tool

This is such a cool tool. Retains copyright and photographer credit. And, what is my collie doing walking on the beach?


November 21, 2014 01:51 PM

November 20, 2014

First Thus

the joy of advanced cataloging

Posting to RadCat

On 11/20/2014 2:33 PM, Snow, Karen wrote:

I love Mann’s essay as well. It’s a good thing that I have all of my beginning cataloging students read that very essay and write a discussion post about it! As part of that assignment, I have them complete a search in I-Share that is similar to the one Mann talks about in “Peloponnesian War…” and discuss their search results. I tell them to pretend that they are college students looking for works on the therapeutic use of storytelling and they must search I-Share using a combination of “storytelling” “therapy” “therapeutic” “story telling” etc….). Then they must find the authorized LCSH for the topic and search again using it (“Narrative therapy”). Even those students who currently work in libraries say that the article and exercise are very eye-opening.

Even though I have had my debates with Thomas Mann, I do like much of his writing, and that includes this essay of his. But I question precisely where the problem is. Mann shows how complicated and difficult it is to use a catalog, but he goes on to lay out very clearly that if you use it right, it can do a lot for you. I wholeheartedly agree with him, but I’m afraid that it is becoming almost irrelevant for most people. Why?

Because the way catalogs work is based on methods that almost nobody uses anymore. The methods are just too alien to 21st-century people: browsing by alphabetical order is truly obsolete in the era of keyword searching, relevance ranking, SQL, Lucene, “intuitive search” and so on. But most especially–and weirdly for people today–in a catalog, people are not supposed to search for the information they want; rather, they are supposed to search for how someone else (aka “the cataloger”) has decided to describe the information they want. That’s completely different, and it is what Mann’s article on the Peloponnesian War is actually all about: finding terms that would never have occurred to the searcher in a thousand years, and using those terms to find the information they want.

I think it all made much more sense 25 years ago, when everyone was handling physical cards arranged in a card catalog, and where you couldn’t just take the cards out and rearrange them as you would like. To do something like that would have been *inconceivable*, but in our catalogs today, we do it all the time! So, back then it was pretty clear that you had to find the right grouping(s) that somebody else had already arranged, e.g. Mann’s example of someone who wants to know about tributes during the Peloponnesian War needs the heading “Finance, Public–Greece–Athens.” Who in the world could ever think of that?

Although the need to do so was rather clear back then, I believe that this way of thinking is too strange an idea for people to grasp today. When we try to teach young people to do this, we look like trudging old dinosaurs.

I think it is obvious that the catalog needs to change how it functions, and Mann’s article is an excellent example of that (although I don’t believe that is what he intended). In my opinion, cataloging and catalog records do not need to change all that much though, because for catalogs to work even in the new environments, records must still be based on the overriding rule of consistency. If you dump consistency, whatever is left might be called a listing, an inventory, an account, and so on, but it cannot be called a catalog.

And yet, if we expect that a member of the public who wants information must learn to follow Mann’s odyssey as laid out in his paper on searching the Peloponnesian War, then it’s game over! People won’t stand for that today and will turn (or have already turned) to other tools.

That’s why I asked in my earlier message: “Does it [advanced cataloging] mean cataloging within the current library-focused world of AACR2/RDA/LCSH/LCC/LCNAF/MARC21/FRBR or does it mean something else?” I know *lots* of people who would say that working with those tools is anything but advanced. I want to emphasize that I disagree with such a notion, but it is clear to me that the catalog must work much differently than it does now.

No matter how differently the future catalog may work, the catalog records will still need to be consistent, although–unfortunately–that seems to be changing.

For these reasons, I would say that in an advanced cataloging class it would be absolutely important to show how important consistency is, and how difficult it is to achieve, both in theory and in practice. But if you drop consistency, you must see the consequences very clearly. Also, people should become aware of how various developments are threatening that consistency and what can be done about it. (I discussed this in my latest podcast, Metadata Creation–down and dirty. I just had to get in that plug!)


by James Weinheimer at November 20, 2014 04:21 PM

Mod Librarian

5 Things Thursday: DAM LA, David Riecks, Taxonomy, Linked Data

Hello,

Here are yet another five things:

  1. Advanced Metadata “Snackinar” recording featuring David Riecks.
  2. Slideshare on Learning W3C Linked Data.
  3. Should we still care about Dublin Core?
  4. DAM LA is happening now. Watch this spot for interesting things…
  5. How to show related posts by taxonomy in WordPress. I should give this a shot…


November 20, 2014 01:12 PM

November 19, 2014

TSLL TechScans

PCC BIBFRAME web page

Paul Frank, along with the PCC Secretariat, has created a new webpage, BIBFRAME and the PCC, to help librarians learn about the BIBFRAME initiative and understand the development of a future bibliographic ecosystem. The creators hope that this page will function as a central source for information, documentation, and updates on the PCC's involvement with BIBFRAME.

Of particular interest is a short paper, authored by Paul Frank, entitled BIBFRAME: Why? What? Who? describing the basics of BIBFRAME and why it is being developed.

by noreply@blogger.com (Jackie Magagnosc) at November 19, 2014 08:37 PM

November 18, 2014

First Thus

ACAT Qualifying filmed stage productions

Posting to Autocat

On 18/11/2014 2.56, Thomas, Kirsti wrote:

Several productions of Richard II have been done in the last few years, so I think it’s important to distinguish this “2013 RSC production starring David Tennant” from other versions like the “2012 BBC Two production starring Ben Whishaw” or the “2011/2012 Donmar Warehouse production starring Eddie Redmayne” or even the “1978 BBC Shakespeare production starring Derek Jacobi.” Our users are typically looking for specific versions by specific directors or with specific actors, so I come down on the side of providing a qualified uniform title. I guess the new RDA term for that is “Authorized Access Point Representing an Expression” ;)

If I understand correctly, this seems to be equating individual performances with expressions. That would be something new, I think. For instance, in music, if someone wants a copy of Beethoven’s 5th Symphony, they search for

Beethoven, Ludwig van, 1770-1827. Symphonies, no. 5, op. 67, C minor.

and then they select what they want from the different records. But you cannot further specify *within the heading* that you want one conducted by von Karajan or by Bernstein. So, catalogers do not create headings with specific conductors such as:
Beethoven, Ludwig van, 1770-1827. Symphonies, no. 5, op. 67, C minor. Toscanini, Arturo, 1867-1957.

or with specific orchestras:
Beethoven, Ludwig van, 1770-1827. Symphonies, no. 5, op. 67, C minor. NBC Symphony Orchestra.

and we certainly do not create something like:
Beethoven, Ludwig van, 1770-1827. Symphonies, no. 5, op. 67, C minor. Toscanini, Arturo, 1867-1957. NBC Symphony Orchestra. Carnegie Hall, March 22, 1952.

All of that information goes into the *record*, but not into the heading. The catalog itself is supposed to provide that access, but people actually have to do some work. In a traditional catalog, such as the LC catalog, we can see how it has always worked. This is a search for the uniform title for Beethoven’s Fifth, and people are still expected to examine each record to choose which one they want.

Of course, it’s easier with keyword searches. Even the search for the specific performance works in Worldcat:
Beethoven, Ludwig van, 1770-1827. Symphonies, no. 5, op. 67, C minor. Toscanini, Arturo, 1867-1957. NBC Symphony Orchestra. Carnegie Hall, March 22, 1952

Things also change in faceted catalogs. Here is the search for just the uniform title in Worldcat

Today, with the facets, we can see different conductors–Furtwängler, Toscanini, etc.–or we can limit by date (i.e., date of production, not date of performance). Facets can be made from any fields in the record. This means that it would be easy enough to make corporate bodies display in the facets so that you could limit by NBC Symphony Orchestra (if it has been put into the record), which for some reason they do not do now. Everything can be changed or improved in almost any way someone would want.

On the other hand, a translation of a libretto of an opera warrants a new expression, e.g.

Wagner, Richard, 1813-1883. Ring des Nibelungen. Libretto. English

Changing the idea of the expression for works of the performing arts (theater, film, music, etc.) so that an expression is determined not only by the author and the piece of music (Beethoven, 5th symphony) but also by the performer(s), and perhaps even by the individual performance, is an interesting idea. I know that when searching music on YouTube for, e.g., a Rolling Stones song, I want the Rolling Stones and not something recorded by little Johnny’s garage band. Or, I may want something *very specific*, such as the Stones’ “Under My Thumb”–but not just any one. I want a specific performance: the one at Altamont in 1969, where the Hell’s Angels killed a spectator and many things in society changed afterwards. This was a historic and important performance–not just any performance by the Stones.

It works in Google for the actual performance! https://www.google.it/search?q=rolling+stones+under+my+thumb+altamont
(it ends just before the violence at the end. Not a very good performance but they were all obviously very unhappy)

For catalogers to give that kind of access through formal headings would be quite a bit more work than what we do now. Prudence dictates that we should first determine whether the extra effort is warranted (and sustainable!), especially when it can be demonstrated that people can find these materials right now in other ways. I think we should just let the catalog work its magic, so that when we search for keywords, we get it (as happens now), or put IT to work improving the facets. That would be a lot cheaper and easier than adding zillions of new “expressions”.


by James Weinheimer at November 18, 2014 08:37 PM

Coyle's InFormation

Classes in RDF

RDF allows one to define class relationships for things and concepts. The RDFS1.1 primer describes classes succinctly as:
Resources may be divided into groups called classes. The members of a class are known as instances of the class. Classes are themselves resources. They are often identified by IRIs and may be described using RDF properties. The rdf:type property may be used to state that a resource is an instance of a class.
This seems simple, but it is in fact one of the primary areas of confusion about RDF.

If you are not a programmer, you probably think of classes in terms of taxonomies -- genus, species, sub-species, etc. If you are a librarian you might think of classes in terms of classification, like Library of Congress or the Dewey Decimal System. In these, the class defines certain characteristics of the members of the class. Thus, with two classes, Pets and Veterinary science, you can have:
Pets
- dogs
- cats

Veterinary science
- dogs
- cats
In each of those, dogs and cats have different meaning because the class provides a context: either as pets, or information about them as treated in veterinary science.

For those familiar with XML, it has similar functionality because it makes use of nesting of data elements. In XML you can create something like this:
<drink>
    <lemonade>
        <price>$2.50</price>
        <amount>20</amount>
    </lemonade>
    <pop>
        <price>$1.50</price>
        <amount>10</amount>
    </pop>
</drink>
and it is clear which price goes with which type of drink, and that the bits directly under the <drink> level are all drinks, because that's what <drink> tells you.

Now you have to forget all of this in order to understand RDF, because RDF classes do not work like this at all. In RDF, the "classness" is not expressed hierarchically, with a class defining the elements that are subordinate to it. Instead it works in the opposite way: the descriptive elements in RDF (called "properties") are the ones that define the class of the thing being described. Properties carry the class information through a characteristic called the "domain" of the property. The domain of the property is a class, and when you use that property to describe something, you are saying that the "something" is an instance of that class. It's like building the taxonomy from the bottom up.

This only makes sense through examples. Here are a few:
1. "has child" is of domain "Parent".

If I say "X - has child - 'Fred'" then I have also said that X is a Parent because every thing that has a child is a Parent.

2. "has Worktitle" is of domain "Work"

If I say "Y - has Worktitle - 'Der Zauberberg'" then I have also said that Y is a Work because every thing that has a Worktitle is a Work.

In essence, X or Y is an identifier for something that is of unknown characteristics until it is described. What you say about X or Y is what defines it, and the classes put it in context. This may seem odd, but if you think of it in terms of descriptive metadata, your metadata describes the "thing in hand"; the "thing in hand" doesn't describe your metadata. 

Like in real life, any "thing" can have more than one context and therefore more than one class. X, the Parent, can also be an Employee (in the context of her work), a Driver (to the Department of Motor Vehicles), a Patient (to her doctor's office). The same identified entity can be an instance of any number of classes.
"has child" has domain "Parent"
"has licence" has domain "Driver"
"has doctor" has domain "Patient"

X - has child - "Fred"  = X is a Parent 
X - has license - "234566"  = X is a Driver
X - has doctor - URI:765876 = X is a Patient
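
Here is the same hypothetical vocabulary and data as a Turtle sketch (the ex: terms are invented for illustration):

@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/vocab/> .

# Vocabulary: each property carries a class as its domain
ex:hasChild   a rdf:Property ; rdfs:domain ex:Parent .
ex:hasLicense a rdf:Property ; rdfs:domain ex:Driver .
ex:hasDoctor  a rdf:Property ; rdfs:domain ex:Patient .

# Instance data: nothing states directly what ex:X is
ex:X ex:hasChild   "Fred" ;
     ex:hasLicense "234566" ;
     ex:hasDoctor  <http://example.org/doctor/765876> .

# RDFS entailment adds the classes from the domains:
#   ex:X a ex:Parent , ex:Driver , ex:Patient .
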
Classes are defined in your RDF vocabulary, as the domains of properties. The above statements require an application to look at the definition of the property in the vocabulary to determine whether it has a domain, and then to treat the subject, X, as an instance of the class declared as the domain of the property. There is another way to provide the class as context in RDF: you can declare it explicitly in your instance data, rather than, or in addition to, having the class characteristics inherent in your descriptive properties when you create your metadata. The term used for this, based on the RDF standard, is "type," in that you are assigning a type to the "thing." For example, you could say:
X - is type - Parent
X - has child - "Fred"
This can be the same class as you would discern from the properties, or it could be an additional class. It is often used to simplify the programming needs of those working in RDF because it means the program does not have to query the vocabulary to determine the class of X. You see this, for example, in BIBFRAME data. The second line in this example gives two classes for this entity:
<http://bibframe.org/resources/FkP1398705387/8929207instance22>
a bf:Instance, bf:Monograph .

One thing that classes do not do, however, is prevent your "thing" from being assigned the "wrong class." You can, though, define your vocabulary to make "wrong classes" apparent. To do this you define certain classes as disjoint; for example, a class of "dead" would logically be disjoint from a class of "alive." Disjoint means that the same thing cannot be of both classes, whether through the direct declaration of "type" or through the assignment of properties. Let's do an example:
"residence" has domain "Alive"
"cemetery plot location" has domain "Dead"
"Alive" is disjoint "Dead" (you can't be both alive and dead)

X - is type - "Alive"                                         (X is of class "Alive")
X - cemetery plot location - URI:9494747      (X is of class "Dead")
Nothing stops you from creating this contradiction, but some applications that try to use the data will be stumped because you've created something that, in RDF-speak, is logically inconsistent. What happens next is determined by how your application has been programmed to deal with such things. In some cases, the inconsistency will mean that you cannot fulfill the task the application was attempting. If you reach a decision point where "if Alive do A, if Dead do B" then your application may be stumped and unable to go on.
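
The same contradiction, sketched in Turtle with made-up terms:

@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/vocab/> .

# Vocabulary: two disjoint classes, each the domain of a property
ex:Alive a owl:Class ; owl:disjointWith ex:Dead .
ex:Dead  a owl:Class .
ex:residence            a rdf:Property ; rdfs:domain ex:Alive .
ex:cemeteryPlotLocation a rdf:Property ; rdfs:domain ex:Dead .

# Instance data: X is typed Alive, but a property places it in Dead
ex:X a ex:Alive ;
     ex:cemeteryPlotLocation <http://example.org/plot/9494747> .

# Under OWL semantics this graph is logically inconsistent;
# what happens next depends entirely on the consuming application.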

All of this is to be kept in mind for the next blog post, which talks about the effect of class definitions on bibliographic data in RDF.

by Karen Coyle (noreply@blogger.com) at November 18, 2014 10:39 AM

November 17, 2014

Resource Description & Access (RDA)

RDA Blog Reaches 200000 Pageviews

Hi all, I am pleased to announce that RDA Blog has crossed the 200,000-pageview mark. It is interesting to note that the first hundred thousand pageviews came in three years, but it took just eight months to add another hundred thousand.
Thanks all for your love, support and suggestions. Please post your feedback and comments on RDA Blog Guest Book. Select remarks will be posted on RDA Blog Testimonials page.



INTRODUCTION TO RDA BLOG:


RDA Blog is a blog on Resource Description and Access (RDA), a new library cataloging standard that provides instructions and guidelines on formulating data for resource description and discovery. RDA is organized based on the Functional Requirements for Bibliographic Records (FRBR) and is intended for use by libraries and other cultural organizations as a replacement for the Anglo-American Cataloguing Rules (AACR2). This blog lists descriptions of and links to resources on Resource Description & Access (RDA). It is an attempt to bring together in one place all the useful and important information, rules, references, news, and links on Resource Description and Access, FRBR, FRAD, FRSAD, MARC standards, AACR2, BIBFRAME, and other items related to current developments and trends in library cataloging practice.

RDA BLOG HIGHLIGHTS IN 1-MINUTE VIDEO PRESENTATION

by Salman Haider (noreply@blogger.com) at November 17, 2014 09:56 PM

First Thus

ACAT RDA Training for Reference Services?

On 16/11/2014 22.25, Callie Blackmer wrote:

This is an assignment for my Cataloging and Classification course so any thoughts would be greatly appreciated:

What is your understanding of the RDA cataloging standards? I came across an article by Teressa M. Keenan (permalink below) in which she discusses how reference services in the library are affected by the shift to RDA standards and why it is important for reference librarians to understand how RDA works so that they are better equipped to direct patrons through the catalog. Do you agree with Keenan: is it important to reference services for non-cataloger librarians to learn cataloging standards?

My own thoughts on this are, first: of course, professionals in any field need to learn their tools and to keep current with changes. Whether you are a dentist, a doctor, a lawyer, a faculty member, a mechanic, or a butcher, almost every field is changing, and each professional must keep up with those changes, whether they agree with them or not. Librarians–be they in cataloging or in reference–are no different. They must all stay current on what is going on in their field.

Nevertheless, I would say that cataloging has gotten such a bad rap, especially in the last few decades, that it is very difficult for lots of non-catalogers to see the importance of any changes. There has always been a divide between reference and cataloging, but from my experience it has gotten more serious. I have heard several reference librarians say that the problem *is* the catalog, and when you add people in systems departments, there are even more challenges. Lots of people know about stories such as “Thinking the unthinkable: a library without a catalogue: Reconsidering the future of discovery tools for Utrecht University library” (http://libereurope.eu/news/thinking-the-unthinkable-a-library-without-a-catalogue-reconsidering-the-future-of-discovery-tools-for-utrecht-university-library/) and “Giving up on discovery” (http://taiga-forum.org/giving-up-on-discovery/). For all different kinds of reasons, I think that it will be difficult to convince many non-catalogers that the changes in cataloging rules are going to have any major impact on the day-to-day activities of users.

Let’s take a very normal example that happens quite literally every day: someone wants an article that they have found cited somewhere. How do you find it? The traditional method says: when you find the citation, note down the name of the journal. (If you don’t have the name of the journal, it is practically impossible to find the article.) Then you go to the catalog and look for the name of the journal to see if the library has it. If it does, then look at the holdings to see if the library has the exact issue you want. If you can’t find it or you have problems, ask a reference librarian. There were always problems with that: complicated records (maybe you are looking at an earlier or later name of the title; maybe a wrong form of title was cited; key titles always confused people), terribly complicated holdings statements, and so on and on.

What is the best way of doing it today? It is completely different, and you don’t even have to use the catalog. To take an example from the article above, “Giving up on discovery” http://taiga-forum.org/giving-up-on-discovery/, the first comment is by Peter Murray, and he says “I also recommend looking back at David W. Lewis’ A Strategy for Academic Libraries in the First Quarter of the 21st Century”. While he gives the citation and a link here, lots of others do not.

Now, however, all you have to do is highlight the name and title “David W. Lewis A Strategy for Academic Libraries in the First Quarter of the 21st Century” (you don’t need the journal title), then right-click and search Google automatically, and you get some great results. (At least I do.) The very first result is the actual article, and the second is something perhaps even more important: David W. Lewis’ Google Scholar page, where you can see more of his writings, plus (very important!) I learn that this article was cited 101 times, and I can click through to those articles right now! The later articles may be even more important to me than this original one.

What does the searcher need to know? Mechanical skills: select text and right click. I can’t imagine anything much easier and there is no comparison with the older methods. It’s also nice if the users know that it is possible to add different search engines and how to do it. Of course, this method doesn’t work all the time, but it works a lot of the time and will work more and more often as more materials come online. I think it should be one of the first methods tried. If it fails, OK: try something else.

Compare this to users searching the library catalog for the individual article. Either they won’t find it (because journal articles are not in there) or there is the “single search box” syndrome, which mashes everything together and has its own problems. (I have discussed this at length in an earlier post http://blog.jweinheimer.net/2014/10/consistency-was-conflicting-instructions-in-bib-formats-about-etds-being-state-government-publications.html)

What I am trying to say from all of this is that while it is very important for reference librarians to keep up with changes in cataloging so they can use it in their practice, the opposite is just as true: catalogers should be learning and adapting to changes among the users, and this is best done through communication with the reference librarians. The world of research is changing in fundamental ways, as is the overwhelming importance of the catalog. The catalog is still immensely important, but it too must adapt to the new realities.

I am sure we are only at the very beginnings of the changes in catalogs–and not all of them will necessarily be for the better.


by James Weinheimer at November 17, 2014 02:24 PM

November 16, 2014

The Feral Cataloger

cbtarsala

Note: As part of a marketing campaign for my proposed classification textbook, I prepared this introduction to BISAC for cataloging students. The original plan was to give it out as a freebie at ALA in Las Vegas to promote the book. Sadly, neither Las Vegas nor the book happened. I am posting it here for the greater public good.

Foreword

Confession: I love DDC. Before I started to research BISAC I wasn’t very impressed with it. Now I have a healthy respect for it. It’s a good scheme for what it does, but I’m afraid that many people who are promoting it are doing so for commercial purposes.

At the end of this section you will be asked to answer the key question: evaluate BISAC as a classification using the process in this chapter. [note: not included in this excerpt]  That’s what the professional should do.

General Background on BISAC

Ditch Dewey! Undo Dewey! Again and again you will read news reports about librarians who are replacing Dewey Decimal Classification in their collections, all the while making awful rhymes and puns as they do (do-ey!) it. At conferences anti-Dewey advocates will sometimes pitch their alternate systems. More often they will promote a system called BISAC. BISAC stands for “Book Industry Standards and Communications.” It is the subject category system used in bookstores. Because BISAC has become more mainstream in the past decade, you might someday work at a library that will debate whether to use it or not.

BISAC is a list of subject headings that are used to express the topical content of books. In a formal information science context, you would call them “descriptors.” There are over 3000 BISAC subject headings available, and they are arranged under fifty-one major headings. Only the major headings have scope notes and usage information.

The BISAC subject headings are hierarchical strings. Here is an example: PETS/Dogs/Breeds. PETS is the major heading, and it is the hierarchical relationship in the string that classifies the concept. The hierarchy is limited to two or three subdivisions below the major heading. PETS/Dogs/Breeds is the most specific level for Dogs. There are no subject headings for particular dog breeds.

Take a look at all the subject headings under the major heading PETS.

PET000000          PETS / General

PETS / Amphibians see Reptiles, Amphibians & Terrariums

PETS / Aquarium see Fish & Aquariums
PET002000          PETS / Birds
PET003000          PETS / Cats / General
PET003010          PETS / Cats / Breeds

PETS / Cooking for Pets see COOKING / Pet Food

PET004000          PETS / Dogs / General
PET004010           PETS / Dogs / Breeds
PET004020          PETS / Dogs / Training
PET010000          PETS / Essays & Narratives
PET005000          PETS / Fish & Aquariums
PET012000          PETS / Food & Nutrition *
PET006000          PETS / Horses
PET011000           PETS / Rabbits, Mice, Hamsters, Guinea Pigs, etc.
PET008000          PETS / Reference
PET009000          PETS / Reptiles, Amphibians & Terrariums

Here are some things to notice:
• The subject headings are arranged alphabetically under each major heading.[1]
• If a subject heading has subdivisions, there is always a heading ending with “/ General.” Therefore, you see PETS / Birds covering all books about birds, but PETS / Dogs / General for the books about dogs that are not about Breeds and Training.
• Each descriptor has a unique code number, but the code notation is not expressive.
• The code starts with three letters to represent the major heading followed by a six-digit number.
• The numbers of the codes are not related to the alphabetical order of the subject headings. However, they do express the hierarchical level of the descriptor. Compare the code for PETS / Dogs / General and PETS / Dogs / Breeds.
• An asterisk marks a newly-added subject heading.
• There are two kinds of cross-references. One kind leads you to another subject heading under the same major heading. Another kind sends you to a different major heading.

(The information above is from the BISAC Subject Headings FAQ at http://www.bisg.org. Accessed 1/8/14.)

BISAC in Its Native Habitat

BISAC comes from the Book Industry Study Group’s Subject Codes Committee. The Committee updates BISAC every year, and you can view the current edition online at the BISG website. American and Canadian publishers assign the subject headings as part of a complete metadata record that is used to market the book.

As happens with any living classification scheme, the annual updates of the descriptors show that the scheme is getting more detailed and expanding. BISG guidelines ask publishers to go through the change list every year and update the categories to the most current. If you are using BISAC as a shelf-arrangement tool, this is something you must monitor and respond to in order to keep your browsing categories up-to-date.

BISAC also offers “extensions” that target specific audiences. There are “Merchandising Themes” for groups of people, events, holidays and topics. Examples of Merchandising Themes are CULTURAL HERITAGE / Asian / Korean or EVENT / Back to School or HOLIDAY / St. Patrick’s Day or TOPICAL / Boy’s Interest. BISG has recently developed an extension for Regional Themes, and it is discussing a new extension for Common Core. Some of these extensions will have relevance for libraries, but currently only the regular subject headings are included in library catalog records.

Another example of BISAC’s growth is the committee’s development of a “Regional Themes” classification, which allows publishers to add a seven-digit hierarchical code to the record, specifying the location about which the work is written. The enumerated codes were only assigned to places that have “more than 100 titles” about them, so you will not find many codes in the seventh position, which represents borough/neighborhood/district. The only places where they are used now are parts of New York City and Los Angeles.

Example: 4.0.1.6.3.1.1 = Beverly Hills. Los Angeles. California. Western & Pacific States. USA. [Zero is an undefined part of the higher level area. In this case, that's the continent]. North America.

It may be interesting for you to learn how a major retailer like Amazon uses BISAC. Self-publishers of books on Amazon are told to choose “up to 2 categories” from BISAC. In certain sub-categories (Romance, Science Fiction & Fantasy, Children’s, Teen & Young Adult, Mystery, Thriller & Suspense, Comics & Graphic Novels, Literature & Fiction, and Erotica), Amazon requires “search keywords” to be added. These are Amazon-specific descriptors.

For example, if you choose the BISAC subject heading FICTION/Romance/Paranormal/Witches & Wizards, you must supply at least one of the following keywords: witch, wizard, warlock, druid, shaman. Amazon also has some BISAC-like subject headings of its own. Romance/Sports is an example, and it also requires you to choose one of the following additional keywords: sport, hockey, soccer, baseball, basketball, football, olympics, climbing, lacrosse, nascar, surfing, boxing, martial arts, golf. In Amazon advanced search you can search for the keywords and add the BISAC subject heading from a drop-down list. This gives you additional power for genre fiction searches. To test it, I did an Amazon search and came up with over one hundred golf romances!

In the future you may see publishers using multiple subject classifications in their metadata records, because BISAC is only one of many available to them. Outside of North America, English-language publishers use a different classification called Book Industry Communication (BIC). There are also other national book industry schemes, and there is a new, multilingual, international subject classification called Thema that was released at the Frankfurt book fair in September 2013. Even though North American publishers will continue to use BISAC, it is important for you to remember that BISAC exists in a landscape of international marketing of books.

What to Expect from BISAC Metadata

BISG’s Metadata Committee gives publishers instructions about how to apply the subject headings in its manual, Product Metadata Best Practices. Everything in the human-crafted element comes from the publisher—an editor or a “marketing department associate.” If these people are following the guidelines in BISG’s best practices, here is what you can expect when accepting downstream subject headings from them.

There will be a “main subject.” Beyond that, BISG recommends “no more than three,” and that number is confirmed by information from the large publisher Random House. (Andrea Bachofen, via Random House’s Random Notes) There are guidelines that encourage the most specific fit. Editors are warned not to add general headings as well as specific ones, and not to assign conflicting exclusive classifications. In particular, you cannot have a book carry both juvenile-audience and non-juvenile headings. Publishers should map their in-house categories to BISAC, and most bookstores map their floor plans to BISAC on the other end. With this consistency in place, the headings may be used to identify category best-sellers.

You should remember that there is an absolutely unbridgeable divide between juvenile and adult subject headings in BISAC. As a classifier you must choose one or the other, if comparable headings exist. You cannot add both of them. The strings and the codes are completely different. JNF015000 JUVENILE NONFICTION / Crafts & Hobbies should not be assigned with CRA043000 CRAFTS & HOBBIES / Crafts for Children or CRA023000 CRAFTS & HOBBIES / Origami.

There is a weird heading called “Non-classifiable.” Non-classifiable is used only for blank books, those decoratively bound things that people buy to use as notebooks or journals.

Where can you get BISAC subject headings for the works in your library?

• Off the Book: Publishers assign BISAC codes to their products according to their own internal standards, and BISG encourages them to put the headings near the bar code in an easy-to-spot location for bookstore owners as they arrange their stock.
• Increasingly, catalog records may include the codes or the descriptors. The Library of Congress started adding them routinely to Electronic CIP. WebDewey includes BISAC headings as an access method to DDC numbers, so you can switch back and forth at some hierarchical levels.

BISAC in MARC Records

Publishers do not use MARC to encode their metadata. The publishing standard for metadata encoding is ONIX, which must be crosswalked into MARC databases. Some libraries take ONIX records directly from the publisher to load into their catalogs, especially for e-books, which don’t have readily available copy in the library source databases. For new books with cataloging-in-publication metadata, however, the Library of Congress crosswalks BISAC into MARC records.

BISAC and other bookseller codes are added to MARC21 records in field 084. You will not see any of the subject headings in a record when you load it into your local catalog because they are coded as a classification. However, it is possible to generate the headings on display using the information in the $2 subfield (source of data) and a list of the BISAC codes. (You must agree to the EULA to include BISAC in your catalog.) A growing percentage of the total records in Worldcat have 084 fields: 40 million out of 311 million records in the database have field 084, which is used for any additional classification scheme beyond DDC or LCC. When 084 is used for BISAC, it is generally in new releases, through downstream ONIX metadata from major publishers.

The 084 field is a small but mighty field that is easy to overlook because it is not easy to decode by reading. LC/PCC records do seem to add it if the metadata is readily available. However, it seems unlikely that major agencies are assigning BISAC as a routine part of their cataloging workflow if it must be generated locally or researched on Amazon or some other database. Some catalogs strip BISAC as part of their copy cataloging processes, so the stored MARC record contains only the library classification field. This is something you must investigate and troubleshoot if you want to switch to BISAC.

Review the ideas.

Discussion question: Is BISAC a formal classification system, as defined in the previous section? How does it rate on the evaluation?

Working with BISAC: For the following book, evaluate the assignment of its BISAC subjects using the BISAC subject heading list online and information about correct assignment included in this post. Are they correctly or incorrectly chosen by the publisher? If incorrect, what would be better choices?

Cat Sense: How the New Feline Science Can Make You a Better Friend to Your Pet by biologist John Bradshaw presents scientific information about domestic cats to a popular audience.
Here are the Library of Congress Subject Headings found in its catalog record:
• Cats—Behavior.
• Cats—Psychology.
• Human-animal relationships.
• Cat owners.

Here are the BISAC subject headings found in its catalog record:
• PETS / Cats / General
• SCIENCE / Life Sciences / Zoology / Mammals
• SCIENCE / Life Sciences / General

Answer: The last two are wrong. SCIENCE is used for works aimed at professionals, and you would never use the general heading if you have a more specific one because the more general is implicit.
NAT019000 NATURE/Animals/Mammals would be preferred as the second subject heading for this work.

Bradshaw’s other book is The Behaviour of the Domestic Cat, 2nd ed. (John W. S. Bradshaw, Rachel Casey, Sarah Brown; Wallingford: CABI, 2012). It is aimed at his fellow scientists/zoologists/anthrozoologists. Both of Bradshaw’s books are classed the same in DDC and LCC: 636.8 in DDC (cats); SF446.5 in LCC (Animal culture — Pets — Cats — Behavior). Neither DDC nor LCC distinguishes between academic and popular treatments of subjects.

In the example above, also note the difference between the BISAC and LCSH strings. LCSH does not include higher levels of a hierarchy in its headings. With LCSH you search for terms directly and specifically, but cannot see a term’s place in the overall structure of the vocabulary without access to a cross-reference structure. These are issues you want to consider when you include both vocabularies as subject headings in your catalog records.

I end with the same advice that I started with: If your library is seriously considering BISAC as a replacement for traditional library classification and subject access, you must evaluate it critically and carefully, because it is not a simple, universal substitute for library-specific classification.


by Cheryl Boettcher Tarsala at November 16, 2014 06:43 PM


It was gratifying to have a thousand people register for my free webinar on cuttering, and then to see over five hundred of them log in when I was presenting.  It shows the continuing need for continuing education in traditional cataloging knowledge. Or maybe it shows that lots of people attend when free webinars are offered. No matter.

Here’s a summary of what I covered:

Are you curious about Cutters? Maybe a little confused? This free webinar will reveal how Cutter’s alphanumeric book numbering systems work. You will learn how to recognize different types of cutter numbers and how to construct them for yourself.

  • Principles of alphanumeric numbering systems
  • Types of Cuttering
    • Cutter Two-Figure, Three-Figure and Cutter-Sanborn
    • Cutters for Library of Congress Classification
  • How to Use the Cutter-Sanborn Table
  • Different Uses for Cuttering in Library of Congress Classification
  • Basic Use of the LC Cutter Table

If you weren’t able to attend the webinar, here’s a link to the session recording. The slides are available at SlideShare. And here’s the resource sheet with all the links that I refer to in the presentation.

Thanks to ALA Editions for hosting it.


by Cheryl Boettcher Tarsala at November 16, 2014 03:21 PM

November 13, 2014

Mod Librarian

5 Things Thursday: More DAM, Portland Art Museum, NYPL, Viewshare

Here are 5 more things:

  1. Criteria for shopping for an appropriate digital asset management system.
  2. NYPL explores “The Networked Catalog.”
  3. Learn about new features in Viewshare visualization software.
  4. How digital asset management helps museums.
  5. A case study on the Portland Art Museum and Extensis Portfolio.


November 13, 2014 01:16 PM