Planet Cataloging

August 04, 2015

OCLC Cataloging and Metadata News

IFLA 2015

Join your colleagues at OCLC information sessions during IFLA 2015

August 04, 2015 01:00 PM

August 03, 2015

025.431: The Dewey blog

International Dewey Users Meeting at IFLA

The International Dewey Users Meeting will be held in conjunction with the IFLA World Library and Information Congress in Cape Town, South Africa, on Tuesday 18 August, 8:00-9:30 am, at the OCLC Hospitality Suite in the Conference Center, Room 1.41/1.42. Learn what’s new with Dewey! Hear from Elise Conradi, Dewey Project Manager at the National Library of Norway, who will discuss the latest developments in terminology mapping, and from Peter Werling, CEO of Pansoft, who will provide an update on Dewey software. Also share ideas and notes with translation partners.

Register for this and other OCLC IFLA Events.

by Juli at August 03, 2015 05:31 PM

CommonPlace.Net

Maps, dictionaries and guidebooks

Interoperability in heterogeneous library data landscapes


Libraries have to deal with a highly opaque landscape of heterogeneous data sources, data types, data formats, data flows, data transformations and data redundancies, which I have earlier characterized as a “data maze”. The level and magnitude of this opacity and heterogeneity vary with the number of content types and services that the library is responsible for. Academic and national libraries are possibly dealing with more extensive mazes than small public or company libraries.

In general, libraries curate collections of things and also provide discovery and delivery services for these collections to the public. In order to carry out these tasks successfully, they manage a lot of data. Data can be regarded as the signals between collections and services.

These collections and services are administered using dedicated systems with dedicated datastores. The data formats in these dedicated datastores are tailored to perform the dedicated services that these systems are designed for. In order to use the data for delivering services they were not designed for, it is common practice to deploy dedicated transformation procedures, either manual ones or automated utilities. These transformation procedures function as translators of the signals in the form of data.

Here lies the origin of the data maze: an inextricably entangled mishmash of systems with explicit and implicit data redundancies, using a number of different data formats, where some systems talk to each other in some way. This is not only confusing for end users but also for library system staff. End users lack clarity about which user interfaces to use, and miss relevant results from other sources and possibly related information. Libraries need licenses and expertise for ongoing administration, conversion and migration of multiple systems, and suffer unforeseen consequences of adjustments made elsewhere.

[Image: © Ron Zack]

To take the linguistic analogy further, systems make use of a specific language (data format) to code their signals in. This is all fine as long as they are only talking to themselves. But as soon as they want to talk to other systems that use a different language, translations are needed, as mentioned. Sometimes two systems use the same language (like MARC, DC, EAD), but this does not necessarily mean they can understand each other. There may be dialects (DANMARC, UNIMARC), local colloquialisms, differences in vocabularies and even alphabets (local fields, local codes, etc.). Some languages are only used by one system (like PNX for Primo). All languages describe things in their own vocabulary. In the systems and data universe there are not many loanwords or other mechanisms to make it clear that systems are talking about the same thing (no relations or linked data). And then there is syntax and grammar (such as subfields and cataloguing rules) that allow for lots of variations in formulations and formats.

Translation does not only require applying a dictionary, but also interpretation of the context, syntax, local variations and transcriptions. Consequently, much is lost in translation.

The transformation utilities functioning as translators of the data signals suffer from a number of limitations. They translate between two specific languages or dialects only, and usually they are employed by only one system (proprietary utilities). So even if two systems speak the same language, they probably both need their own translator from a common source language. In many cases even two separate translators are needed, if source and target system do not speak each other’s language or dialect: the source signals are translated into some common language, which in turn is translated into the target language. This export-import scenario, which entails data redundancy across systems, is referred to as ETL (Extract Transform Load). Moreover, most translators only know a subset of the source and target language, depending on the data signals needed by the provided services. In some cases “data mappings” are used as conversion guides, but that term does not really cover what is actually needed, as I have tried to demonstrate: it is not enough to show the paths between source and target signals; it is essential to add the selections and transformations needed as well. In order to make sense of the data maze you need a map, a dictionary and a guidebook.
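To make that concrete, here is a minimal sketch of such a translator in Python (the field names and mapping rules are invented for the example; real utilities are far more involved):

# A "data mapping" alone only names the paths between source and target
# signals; the selection and transformation rules do the real work.
def transform(source_record):
    # Selection: skip records that carry no title signal
    if not source_record.get('245_a'):
        return None
    # Transformation: strip ISBD punctuation, rename the fields
    return {
        'title': source_record['245_a'].rstrip(' /'),
        'creator': source_record.get('100_a', ''),
    }

marc_like = {'245_a': 'Maps, dictionaries and guidebooks /',
             '100_a': 'Koster, Lukas'}
dc_like = transform(marc_like)
# -> {'title': 'Maps, dictionaries and guidebooks', 'creator': 'Koster, Lukas'}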

To make things even more complicated, sometimes reading data signals is only possible with a passport or visa (authentication for access to closed data). Or even worse, when systems’ borders are completely closed and no access whatsoever is possible, not even with a passport. Usually, this last situation is referred to with the term “data silos”, but that is not the complete picture. If systems are fully open, but their data signals are coded by means of untranslatable languages or syntaxes, we are also dealing with silos.

Anyway, a lot of attention and maintenance is required to keep this Tower of Babel functioning. This practice is extremely resource-intensive, costly and vulnerable. Are there any solutions available to diminish maintenance, costs and vulnerability? Yes there are.

First of all, it is absolutely crucial to get acquainted with the maze. You need a map (or even an atlas) to be able to see which roads are there, which ones are inaccessible, what traffic is allowed, what shortcuts are possible, which systems can be pulled down and where new roads can be built. This role can be fulfilled by a Dataflow Repository, which presents an up-to-date overview of locations and flows of all content types and data elements in the landscape.

Secondly, it is vital to be able to understand the signals. You need a dictionary to be able to interpret all signals, languages, syntaxes, vocabularies, etc. A Data Dictionary describing data elements, datastores, dataflows and data formats is the designated tool for this.

And finally, it is essential to know which transformations are taking place en route. A guidebook should be incorporated in the repository, describing the selections and transformations for every data flow.
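To make the map, dictionary and guidebook concrete: a single entry in such a repository might hold something like the following (a sketch in Python; all field names and values are invented for illustration):

# One documented dataflow: map (route), dictionary (formats and
# elements) and guidebook (selections and transformations) together.
dataflow_entry = {
    # Map: where the road runs
    "source": {"system": "ILS", "store": "catalog_db", "format": "MARC21"},
    "target": {"system": "discovery index", "format": "Dublin Core"},
    # Dictionary: which signals are involved and how they are coded
    "elements": {"245 $a": "dc:title", "100 $a": "dc:creator"},
    # Guidebook: what happens en route
    "selection": "records with holdings only",
    "transformations": ["strip ISBD punctuation from 245 $a"],
    "schedule": "nightly ETL",
}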

You could leave it there and be satisfied with these guiding tools, which help you get around the existing data maze more efficiently, with all its ETL utilities and data redundancies. But there are other solutions that focus on actually tackling or even eliminating the translation problem. Basically we are looking at some type of Service Oriented Architecture (SOA) implementation. SOA is a rather broad concept, but it refers to an environment where individual components (“systems”) communicate with each other in a technology- and vendor-agnostic way using interoperable building blocks (“services”). In this definition “services” refers to reusable dataflows between systems, rather than to useful results for end users. I would prefer a definition of SOA as “a data and utilities architecture focused on delivering optimal end user services no matter what”.

Broadly speaking there are four main routes to establish a SOA-like condition, all of which can theoretically be implemented on a global, intermediate or local level.

  1. Single Store/Single Format: A single universal integrated datastore using a universal data format. No need for dataflows and translations. This would imply some sort of linked (open) data landscape with RDF as universal language and serving all systems and services. A solution like this would require all providers of relevant systems and databases to commit to a single universal storage format. Unrealistic in the short term indeed, but definitely something to aim for, starting at the local level.
  2. Multiple Stores/Shared Format: A heterogeneous system and datastore landscape with a universal communication language (a lingua franca, like English) for dataflows. No need for countless translators between individual systems. This universal format could be RDF in any serialization. A solution like this would require all providers of relevant systems and databases to commit to a universal exchange format. Already a bit less unrealistic.
  3. Shared Store/Shared Format: A heterogeneous system and datastore landscape with a central shared intermediate integrated datastore in a single shared format. Translations from different source formats to only one shared format. Dataflows run to and from the shared store only. For instance with RDF functioning as Esperanto, the artificial language which is actually sometimes used as an “interlingua” in machine translation. A solution like this does not require a universal exchange format, only a translator that understands and speaks all formats, which is the basis of all ETL tools. This is much more realistic, because system and vendor dependencies are minimized, except for variations in syntax and vocabularies. The platform itself can be completely independent. (A small sketch of this route follows after this list.)
  4. Multiple Stores/Single Translation Pool: or what is known as an Enterprise Service Bus (ESB). No translations are stored, no data is integrated. Simultaneous point-to-point translations between systems happen on the fly. This looks very much like the existing data maze, but with all translators sitting together in one cubicle. This solution is not a source of much relief, or as one large IT vendor puts it: “Using an ESB can become problematic if large volumes of data need to be sent via the bus as a large number of individual messages. ESBs should never replace traditional data integration like ETL tools. Data replication from one database to another can be resolved more efficiently using data integration, as it would only burden the ESB unnecessarily.”
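As a small sketch of route 3, with RDF as the Esperanto of a shared intermediate store, in Python with the rdflib library (the URI and the mapping are illustrative only):

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

shared_store = Graph()
work = URIRef("http://example.org/work/1")  # hypothetical identifier

# Each source format is translated once, into the shared store...
shared_store.add((work, DC.title, Literal("Maps, dictionaries and guidebooks")))
shared_store.add((work, DC.creator, Literal("Koster, Lukas")))

# ...and every target system reads from the shared store, instead of
# needing its own translator for every other system in the landscape.
for title in shared_store.objects(work, DC.title):
    print(title)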

Surveying the possible routes out of the data maze, it seems that the first step should be to employ the map, dictionary and guidebook concept: the dataflow repository, the data dictionary and the transformation descriptions. After that, the only feasible road in the short term is the intermediate integrated Shared Store/Shared Format solution.

by Lukas Koster at August 03, 2015 02:51 PM

August 02, 2015

Terry's Worklog

MarcEdit Mac Preview Update

MarcEdit Mac users, a new preview update has been made available. This is getting pretty close to the first “official” release of the Mac version. And for those who may have forgotten, the preview designation will be removed on Sept. 1, 2015.

So what’s been done since the last update?  Well, I’ve pretty much completed the last of the work that was scheduled for the first official release.  At this point, I’ve completed all the planned work on the MARC Tools and the MarcEditor functions.  For this release, I’ve completed the following:

****************************
** 1.0.9 ChangeLog
****************************

  • Bug Fix: Opening Files — previously, you could not select any files except those with a .mrc extension. The open dialog can now open multiple file types.
  • Bug Fix: MarcEditor — when resizing the form, the filename in the status bar could disappear.
  • Bug Fix: MarcEditor — when resizing, the number of records per page moved off the screen.
  • Enhancement: Linked Data Records — tool provides the ability to embed URI endpoints at the end of 1xx, 6xx, and 7xx fields.
  • Enhancement: Linked Data Records — tool has been added to the Task Manager.
  • Enhancement: Generate Control Numbers — globally generates control numbers.
  • Enhancement: Generate Call Numbers/Fast Headings — globally generates call numbers/FAST headings for selected records.
  • Enhancement: Edit Shortcuts — added back the tool to enable Record Marking via a comment.

Over the next month, I’ll be working to complete four other components prior to the first “official” release on Sept. 1. This means that I’m anticipating at least one, maybe two, more large preview releases before Sept. 1, 2015. The four items I’ll be targeting for completion are:

  1. Export Tab Delimited Records Feature — this feature allows users to take MARC data and create delimited files (often for reporting or loading into a tool like Excel).
  2. Delimited Text Translator — this feature allows users to generate MARC records from a delimited file (see the sketch after this list). The Mac version will not, at least initially, be able to work with Excel or Access data; the tool will be limited to working with delimited data.
  3. Update the Preferences window to expose MarcEditor preferences.
  4. OCLC Metadata Framework integration — specifically, I’d like to re-integrate the holdings work and the batch record download.
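For a feel of what a delimited-to-MARC translation involves, here is a minimal sketch in Python using the pymarc library (the column names and file names are hypothetical, and MarcEdit itself is not built on this code):

import csv
from pymarc import Record, Field

with open('titles.txt', newline='') as tsv, open('out.mrc', 'wb') as out:
    for row in csv.DictReader(tsv, delimiter='\t'):
        record = Record()
        # Map each delimited column onto a MARC field/subfield
        record.add_field(Field(tag='100', indicators=['1', ' '],
                               subfields=['a', row['author']]))
        record.add_field(Field(tag='245', indicators=['1', '0'],
                               subfields=['a', row['title']]))
        out.write(record.as_marc())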

How do you get the preview? If you have the current preview installed, just open the program; as long as you have notifications turned on, the program will notify you that an update is available. Download the update and install the new version. If you don’t have the preview installed, just go to http://marcedit.reeset.net/downloads and select the Mac app download.

If you have any questions, let me know.

–tr

by reeset at August 02, 2015 11:42 PM

August 01, 2015

Resource Description & Access (RDA)

Inaccuracies in RDA

Please see Inaccuracies [RDA Blog post revised with Question & Answer on 2015-07-28]


Please provide your comments on this interpretation of RDA rules, as mentioned in the Question & Answer part of this RDA Blog post.

<<<<<---------->>>>>
Comments:

Comment by Bob Kosovsky, Curator, Rare Books and Manuscripts, Music Division, The New York Public Library, New York, United States:
Oh! I thought it was about things wrong in RDA. Rather, it's about how to deal with inaccuracies in cataloging materials. :)

<<<<<-----Revised 2015-07-30----->>>>>

by Salman Haider (noreply@blogger.com) at August 01, 2015 02:29 AM

Numbering of Serials in RDA Cataloging


Numbering of Serials

  • Numeric and/or alphabetic designation of first issue or part of sequence, chronological designation of first issue or part of sequence, numeric and/or alphabetic designation of last issue or part of sequence, and chronological designation of last issue or part of sequence are CORE ELEMENTS. Other numbering is optional.
  • See RDA instruction 2.6.1.

Numbering of serials is the identification of each of the issues or parts of a serial. It may include a numeral, a letter, a character, or a combination of these, with or without an accompanying caption (volume, number, etc.) and/or a chronological designation (RDA 2.6.2-2.6.5).

Recording Numbering of Serials
  • Record numbers expressed as numerals or as words by applying the general guidelines given under 1.8. Transcribe other words, characters, or groups of words and/or characters as they appear on the source of information, applying the general guidelines on transcription given under 1.7. Substitute a slash for a hyphen, as necessary, for clarity.
  • Record the numbering of the first issue; if the serial has ceased publication, also record the numbering of the last issue.
  • If the numbering starts a new sequence with a different system, record the numbering of the first issue of each sequence and the numbering of the last issue of each sequence.
Examples:
362 0# $a Volume X, number 1-          (formatted style)
362 1# $a Began with January 2010 issue (unformatted style) 


[Source: Library of Congress]
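If you script your records, the two styles above can be generated along these lines in Python with the pymarc library (a sketch only; the data values are taken from the examples above):

from pymarc import Field

# Formatted style: first indicator 0
formatted = Field(tag='362', indicators=['0', ' '],
                  subfields=['a', 'Volume X, number 1-'])

# Unformatted style: first indicator 1
unformatted = Field(tag='362', indicators=['1', ' '],
                    subfields=['a', 'Began with January 2010 issue'])

print(formatted)  # => =362  0\$aVolume X, number 1-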

<<<<<---------->>>>>

Comments:


Aaron Kuperman: The biggest problem with serials is that the "oral traditions" of serial catalogers are such that even when a work clearly has a single creator (and should get a 100 entry), and is cited that way in reference sources and by users of the catalog, the serial catalogers insist the person is a mere editor and therefore a contributor (only a 700 entry) -- and as more and more monographs are being cataloged as serials/continuing resources, we are losing access to the most important access point (n.b. Cutter's first rule that the catalog needs to provide access to works by the name of the author). Under RDA, serial catalogers should make 100 heading entries for authors who create the work -- which is what RDA says to do, and which they don't.
[Aaron Kuperman is a Law cataloger at Library of Congress, Washington D.C.]

<<<<<-----Revised 2015-07-31----->>>>>




by Salman Haider (noreply@blogger.com) at August 01, 2015 12:43 AM

July 31, 2015

First Thus

ACAT Repeating 520?

On 7/1/2015 12:43 AM, J. McRee Elrod wrote:
> On either Autocat or RDA-L someone asked about repeating 520s in order to have paragraphs. I answered that SLC only uses repeating 520s for multiple descriptions, e.g., a set or kit.
>
> I suggested hyphens as in 505. In an offlist message John Marr suggests using 520$b to create a break; $b can only be used once per 520, and is for a fuller description than in $a.

For those who have access to the HTML/CSS coding of their catalogs, there is a value “white-space: pre-wrap” that preserves line breaks. With “white-space: pre-wrap” the text wraps both on line breaks and when the browser window requires it.

What this means is, you could just put in line breaks where you want them in the 520, e.g. Line 1.[return]Line 2.[return]Line 3.

and with the correct style, it can display as:
Line 1.
Line 2.
Line 3.
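For reference, a minimal sketch of the idea (the markup in the actual fiddle may differ):

<style>
  /* pre-wrap honors the record's own line breaks AND wraps at the window edge */
  .summary { white-space: pre-wrap; }
</style>
<div class="summary">Line 1.
Line 2.
Line 3.</div>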

I made a “fiddle” at https://jsfiddle.net/ouv4h56z/1/ where you can see it in action and even work with it yourselves. The top-left division (labelled HTML) has the code and you can make it bigger to see everything. You can put any text in you want to see how it works. In fact, all the divisions can be resized and it’s kind of fun to see how it works when you make the right hand divisions more narrow.

Here are the other values for “white-space”: http://www.w3schools.com/cssref/pr_text_white-space.asp

This is yet another reason why open source is so important for libraries today! A simple 10-second fix to a problem.

James Weinheimer weinheimer.jim.l@gmail.com
First Thus http://blog.jweinheimer.net
First Thus Facebook Page https://www.facebook.com/FirstThus
Personal Facebook Page https://www.facebook.com/james.weinheimer.35
Google+ https://plus.google.com/u/0/+JamesWeinheimer
Cooperative Cataloging Rules http://sites.google.com/site/opencatalogingrules/
Cataloging Matters Podcasts http://blog.jweinheimer.net/cataloging-matters-podcasts
The Library Herald http://libnews.jweinheimer.net/


by James Weinheimer at July 31, 2015 07:23 AM

July 30, 2015

Mod Librarian

5 Things Thursday: Graph Databases, Google, DAM and Mobile DAM

Here are five more things:

  1. Why graph databases are the future.
  2. Demystifying the Google Knowledge Graph.
  3. Digital Asset Management (DAM) Fact Sheet.
  4. Like the idea of mobile DAM? So does Canto (and others)…
  5. Another DAM Podcast interview with Fred Robertson from Bose.


July 30, 2015 12:02 PM

Terry's Worklog

MarcEdit 6 Updates

I hadn’t planned on putting together an update for the Windows version of MarcEdit this week, but I’ve been working with someone putting the Linked Data tools through their paces and came across instances where some of the linked data services were not sending back valid XML data – and I wasn’t validating it. So, I took some time and added some validation. However, because the users are processing over a million items through the linked data tool, I also wanted to provide a more user-friendly option that doesn’t require opening the MarcEditor – so I’ve added the linked data tools to the command line version of MarcEdit as well.
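MarcEdit itself is written in C#, but the defensive pattern is simple; here is a sketch of the idea in Python (illustrative only, not MarcEdit’s actual code):

import xml.etree.ElementTree as ET

def parse_if_valid(response_text):
    """Return the parsed XML root, or None if the service sent back junk."""
    try:
        return ET.fromstring(response_text)
    except ET.ParseError:
        # Note the failure and skip, rather than crash a million-record batch
        return None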

Linked Data Command Line Options:

The command line tool is probably one of those under-used and unknown parts of MarcEdit. The tool is a shim over the code libraries – exposing functionality from the command line, and making it easy to integrate with scripts written for automation purposes. The tool has a wide range of options available to it, and users unfamiliar with the command line tool can get information about the functionality offered by querying help. If you use the command line tool, you’ll likely want to create an environment variable pointing to the MarcEdit application directory so that you can call the program without needing to navigate to the directory. For example, on my computer, I have an environment variable called %MARCEDIT_PATH% which points to the MarcEdit app directory. This means that if I wanted to run the help for the MarcEdit command line tool, I’d run the following and get the following results:

C:\Users\reese.2179>%MARCEDIT_PATH%\cmarcedit -help
***************************************************************
* MarcEdit 6.1 Console Application
* By Terry Reese
* email: reeset@gmail.com
* Modified: 2015/7/29
***************************************************************
Arguments:
        -s:     Path to file to be processed.
                        If calling the join utility, source must be files
                        delimited by the ";" character
        -d:     Path to destination file.
                        If calling the split utility, dest should specify a folder
                        where split files will be saved.
                        If this folder doesn't exist, one will be created.
        -rules: Rules file for the MARC Validator.
        -mxslt: Path to the MARCXML XSLT file.
        -xslt:  Path to the XML XSLT file.
        -batch: Specifies Batch Processing Mode
        -character:     Specifies character conversion mode.
        -break: Specifies MarcBreaker algorithm
        -make:  Specifies MarcMaker algorithm
        -marcxml:       Specifies MARCXML algorithm
        -xmlmarc:       Specifies the MARCXML to MARC algorithm
        -marctoxml:     Specifies MARC to XML algorithm
        -xmltomarc:     Specifies XML to MARC algorithm
        -xml:   Specifies the XML to XML algorithm
        -validate:      Specifies the MARCValidator algorithm
        -join:  Specifies join MARC File algorithm
        -split: Specifies split MARC File algorithm
        -records:       Specifies number of records per file [used with split command].
        -raw:   [Optional] Turns off mnemonic processing (returns raw data)
        -utf8:  [Optional] Turns on UTF-8 processing
        -marc8: [Optional] Turns on MARC-8 processing
        -pd:    [Optional] When a malformed record is encountered, modifies the process from a stop process to one where an error is simply noted and a sub note is added to the result file.
        -buildlinks:    Specifies the Semantic Linking algorithm.
                        This function needs to be paired with the -options parameter
        -options:       Specifies linking options to use; example: lcid,viaf:lc,oclcworkid,autodetect
                        lcid: utilizes id.loc.gov to link 1xx/7xx data
                        autodetect: autodetects subjects and links to known values
                        oclcworkid: inserts link to oclc work id if present
                        viaf: linking 1xx/7xx using viaf. Specify index after colon.
                        If no index is provided, lc is assumed.
                        VIAF Index Values:
                        all -- all of viaf
                        nla -- Australia's national index
                        vlacc -- Belgium's Flemish file
                        lac -- Canadian national file
                        bnc -- Catalunya
                        nsk -- Croatia
                        nkc -- Czech.
                        dbc -- Denmark (dbc)
                        egaxa -- Egypt
                        bnf -- France (BNF)
                        sudoc -- France (SUDOC)
                        dnb -- Germany
                        jpg -- Getty (ULAN)
                        bnc+bne -- Hispanica
                        nszl -- Hungary
                        isni -- ISNI
                        ndl -- Japan (NDL)
                        nli -- Israel
                        iccu -- Italy
                        LNB -- Latvia
                        LNL -- Lebanon
                        lc -- LC (NACO)
                        nta -- Netherlands
                        bibsys -- Norway
                        perseus -- Perseus
                        nlp -- Polish National Library
                        nukat -- Poland (Nukat)
                        ptbnp -- Portugal
                        nlb -- Singapore
                        bne -- Spain
                        selibr -- Sweden
                        swnl -- Swiss National Library
                        srp -- Syriac
                        rero -- Swiss RERO
                        rsl -- Russian
                        bav -- Vatican
                        wkp -- Wikipedia

        -help:  Returns usage information

The linked data option uses the following pattern: cmarcedit.exe -s [sourcefile] -d [destfile] -buildlinks -options [linkoptions]

As noted in the list above, -options is a comma-delimited list that includes the values that the linking tool should query. For example, for a user looking to generate work ids and URIs on the 1xx and 7xx fields using id.loc.gov, the command would look like:

cmarcedit.exe -s [sourcefile] -d [destfile] -buildlinks -options oclcworkid,lcid

Users interested in building all available linkages (using viaf, autodetecting subjects, etc.) would use:

cmarcedit.exe -s [sourcefile] -d [destfile] -buildlinks -options oclcworkid,lcid,autodetect,viaf:lc

Notice the last option, viaf. This tells the tool to utilize viaf as a linking option in the 1xx and the 7xx; the data after the colon identifies the index to utilize when building links. The indexes are listed in the help (see above).

Download information:

The update can be found on the downloads page (http://marcedit.reeset.net/downloads) or by using the automated update tool within MarcEdit.

Mac Port Update:

Part of the reason I hadn’t planned on doing a Windows update of MarcEdit this week is that I’ve been heads down making changes to the Mac port. I’ve gotten good feedback from folks letting me know that so far, so good. Over the past few weeks, I’ve been integrating missing features from the MarcEditor into the port, as well as working on the Delimited Text Translator. I’ll now have to go back and make a couple of changes to support some of the update work in the Linked Data tool – but I’m hoping that by Aug. 2nd, I’ll have a new Mac Port Preview that will be pretty close to completing (and expanding) the initial port sprint.

Questions, let me know.

–tr

by reeset at July 30, 2015 04:39 AM

July 28, 2015

Resource Description & Access (RDA)

LC RDA Implementation of Relationship Designators in Bibliographic Records

RDA RELATIONSHIP DESIGNATORS

Library of Congress Implementation of Resource Description and Access Relationship Designators in Bibliographic Records with MARC 21 RDA Cataloging Examples, Guidelines, and Best Practices


Key points
  • The training manual is only for relationship designators in bibliographic records.
  • The requirement for providing relationship designators is only applicable to creators, whether coded in MARC 1XX or 7XX.
  • Relationship designators for Person-Family-Corporate Body should not be used in a name/title access point tagged 7XX or 8XX, even if they are creators.
  • If the nature of the relationship cannot be ascertained even at a general level, do not assign a relationship designator (if in doubt, leave it out).
  • Other relationship designators are encouraged.
  • The element name, e.g., “creator”, may be used as a relationship designator when you can’t determine a more appropriate relationship designator.
  • The training manual also provides a useful section on “Punctuation and Capitalization”.
  • Relationship designators in RDA may change, so always search Appendix I or J for the correct term before assigning one.
Application
The new policy applies to newly completed RDA records, not to routine maintenance of already completed records, or to non-RDA records.

Monographs: apply to full level original RDA records being coded 042 “pcc”. It is encouraged, but not required, to apply the policy to minimal level cataloging or imported records treated as “copycat” or “pccadap” in 906 $c. Note that relationship designators are “passed through” if present in copied records, unless egregiously incorrect.

Serials and Integrating Resources: apply to all CONSER authenticated records.

Implementation date: July 1, 2015

Best Practices 
  • Using relationship designators for other types of relationships (for example, contributor relationships), is strongly encouraged. 
  • Include a relationship designator, even if it repeats a term used as a qualifier to the name. 
  • Consult RDA Appendix I.2.1: Relationship Designators for Creators. Remember that the relationship designators that are used with creators are on the list in RDA Appendix I.2.1, not on the lists in I.2.2 or I.3.1. 
  • It is recommended that PCC catalogers use relationship designators from the RDA appendices. If the term needed is not there, use the Fast Track PCC relationship designator proposal form to propose a new term or request a revision of an existing term. 

General Guidelines

Guideline 1: Use of this Training Manual 
This training manual is intended to be used as a resource when applying relationship designators in RDA bibliographic records. It does not apply to authority records.

Guideline 2: Sources for Relationship Designators 
It is recommended that PCC catalogers use relationship designators from the RDA appendices. If the term needed is not there, use the PCC relationship designator proposal form to propose a new term or request a revision of an existing term. 

If a PCC cataloger wishes to use a term from a different registered vocabulary (e.g., MARC relator terms, RBMS relationship designators, etc.), he/she may do so.

Guideline 3: Specificity 
Within a hierarchy of relationship designators, prefer a specific term to a general one if it is easily determined. For example, use librettist rather than author for the creator of a libretto, or lyricist rather than author for the creator of the words for songs in a musical.

Guideline 4: RDA Element Name as Relationship Designator 
Assign an RDA element name as a relationship designator (e.g., "creator" (19.2) or "publisher" (21.3)) if it will most appropriately express the relationship. 

However, do not propose RDA element names for inclusion in RDA relationship designator lists. 

Guideline 5: Unclear Relationship 
If the nature of the relationship cannot be ascertained even at a general level, do not assign a relationship designator.

Guideline 6: Adding a Relationship Designator to Existing Terms and/or Codes 
Do not evaluate or edit older codes or terms in cataloging records unless they are clearly in error. Add an RDA relationship designator following an existing term, and before any existing code.

Guideline 7: Applying Relationship Designators in Accordance with their Definitions 
Be careful to apply relationship designators in accordance with their definitions. For example, note the difference between artist and illustrator. If the definitions or the hierarchies appear to be problematic, propose changes to them. Fast Track procedures are in process; see the PCC Relationship Designator Proposal Form.

Guideline 8: Access Point and Relationship Designator for an Entity Not Named in a Resource 

In general, it is not necessary to provide access points for related entities not named in the resource. However, other sources of information may be consulted to identify related entities and determine the nature of their relationship to the resource.

Guidelines for Appendix I Relationship Designators 

Guideline 9: Relationship Designators for All Access Points 
PCC highly encourages including relationship designators for all access points whenever it is clear what the relationship is. 

Guideline 10: More than One Relationship Designator Appropriate 
If more than one relationship designator is appropriate because the same entity has multiple roles, preferably use repeating $e (or $j for MARC X11 fields). If necessary, multiple headings may be used instead. Add relationship designators in WEMI order. 
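For example, a hypothetical heading for a person with two roles, with designators in WEMI order (work role before expression role):

700 1# $a Smith, John, $e composer, $e performer.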

Guideline 11: Relationship Designators for Families and Corporate Bodies 
Note that the relationship designators in RDA Appendix I may be applied to families and corporate bodies as well as to individuals.

Guideline 12: Relationship Designators and Name/Title Access Points in 7XX 
Appendix I relationship designators should not be used in a name/title access point tagged MARC 700-711 or 800-811, or in a name/title linking field tagged MARC 76X-78X.

Guidelines for Appendix J Relationship Designators 

Guideline 13: Relationship Designators for Resource-to-Resource Relationships 
The use of relationship designators for resource-to-resource relationships is encouraged.

Guideline 14: Relationship Designator Implied by MARC 7XX Content Designation 
If a cataloger wishes to indicate a known relationship to a known resource, and the $i relationship information subfield is defined for the MARC 7XX field being used, provide a relationship designator. Do so even if the field coding otherwise already expresses a relationship.

Guideline 15: Multiple Relationships 
Where multiple relationships exist, e.g., an abridged translation, provide separate access points, each with a single relationship designator in a single $i subfield. Alternatively, identify one relationship as primary and record that relationship alone.
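For example, an abridged translation might be given two access points, one designator per $i (hypothetical data; as the guideline above advises, check Appendix J for the current terms):

700 1# $i Translation of: $a Tolstoy, Leo, $d 1828-1910. $t Voĭna i mir.
700 1# $i Abridgement of (expression): $a Tolstoy, Leo, $d 1828-1910. $t Voĭna i mir.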

Guideline 16: Reciprocal Relationships for Sequential Works and/or Expressions 
Except in the case of sequential work or expression relationships and equivalent manifestation relationships for serials, it is not necessary to provide reciprocal relationship fields.

Guideline 17: Relationship Designator for Related Resource when MARC 130 or 240 is Present 
Catalogers may add a 7XX field with a relationship designator referring to a specific related resource even if a 130 or 240 field is already present implying that they are versions of the same work.

Guideline 18: Unknown or Uncertain Relationship in a Resource 
If there is reason to believe that the resource being cataloged is related to another resource, but the resource in question cannot be identified (e.g., in the case of an expression that is believed to be a translation but the original is unknown), give the information in a note.

Guideline 19: Related Resource with Same Principally Responsible Creator 
When constructing a reference to a related resource sharing the same principally responsible creator as the resource being described, record the authorized access point for the related entity in a 700/710/711/730 author-title access point explicitly naming the creator in its $a, rather than a 740 title entry with an implied relationship to the 1XX in the same record. 

Guideline 20: Unstructured Descriptions 
For unstructured descriptions it is not necessary to indicate the WEMI level at which the relationship is asserted. 

Punctuation and Capitalization 

Designators that Follow Authorized Access Points 
Relationship designators that follow authorized access points are not capitalized and are always preceded by a comma, unless the authorized access point ends in an open date. 

Designators that Precede Authorized Access Points or Appear at the Beginning of a Field 
Relationship designators that precede authorized access points or that appear at the beginning of a field are capitalized and are followed by a colon.
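For example (hypothetical data):

100 1# $a Austen, Jane, $d 1775-1817, $e author.
730 0# $i Translation of: $a Original title.

In the first field the designator follows the access point (lower case, preceded by a comma); in the second it begins the field (capitalized, followed by a colon).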





Relationship Designators in RDA: Connecting the Dots 


[Source: Adam L. Schiff, Principal Cataloger, University of Washington Libraries]



<<<<<<<<<<---------->>>>>>>>>>

Comments:

Comment by Bob Kosovsky, Curator, Rare Books and Manuscripts, Music Division, The New York Public Library, New York, United States:
Actually the latest blog entry has a nice detailed entry on relationship designators. I couldn't find them choosing between the codes or the full designation, but they speak of the codes only in the past tense, and considering RDA's general drift to spell things out, I take that to mean one should spell out the designator.

Comment by Robin Fay, Metadata, Web, & Social Media Consultant for libraries & beyond:
Yes, I agree with Bob Kosovsky. I would not abbreviate the relationship designators. Now, in theory (in theory), machines should be (should be) smart enough to figure out abbreviations in context, but they can't really yet. The best you can do is program the options, and with a fast enough machine it will run through the choices so fast it will seem that it thinks (vs. probability). So something like: $e ed. maps to $e editor.


by Salman Haider (noreply@blogger.com) at July 28, 2015 04:19 AM

July 27, 2015

TSLL TechScans

LC makes BIBFRAME training materials available

In preparation for its much-anticipated BIBFRAME cataloging pilot project, the Library of Congress has developed training materials for staff involved in the pilot, and made the first of three modules available online at http://www.loc.gov/catworkshop/bibframe/. Module one is divided into two sets of slides, plus supplementary reading/viewing assignments and brief quizzes. The training materials are designed for experienced catalogers and do not assume prior knowledge of linked data concepts.

The first set of slides provides a brief introduction to the concepts behind the Semantic Web and linked data, and the evolution of the World Wide Web from a web of documents to a web of data. It explains the need to move bibliographic data out of its MARC silo and onto the Semantic Web.

The second set of slides delves into the principles underlying RDF (Resource Description Framework), the “language of the Web.” Detailed, clearly presented examples of RDF triples provide a concrete visualization of what bibliographic data structured in RDF looks like.
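A bibliographic triple of the kind the slides walk through looks something like this (an invented example using a Dublin Core property, not taken from the slides):

<http://example.org/book/123> <http://purl.org/dc/terms/title> "Moby Dick" .

The subject (the book) and the predicate (the title property) are identified by URIs; the object here is a literal string.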

Although I found a number of typos in the slides (I AM a cataloger, after all!), I found the training materials very helpful in confirming and deepening my knowledge of linked data and the Semantic Web. 

by noreply@blogger.com (Jean Pajerek) at July 27, 2015 01:28 PM

July 25, 2015

Terry's Worklog

Code4LibMW 2015 Write-up

Whew – it’s been a wonderfully exhausting past few days here in Columbus, OH, as the Libraries played host to Code4LibMW. This has been something that I’ve been looking forward to ever since making the move to The Ohio State University; the C4L community has always been one of my favorites, and while the annual conference continues to be one of the most important meetings on my calendar – it’s within these regional events where I’m always reminded why I enjoy being a part of this community.

I shared a story with the folks in Columbus this week. As one of the folks that attended the original C4L meeting in Corvallis back in 2006 (BTW, there were 3 other original attendees in Columbus this week), there are a lot of things that I remember about that event quite fondly. Pizza at American Dream, my first experience doing a lightning talk, the joy of a conference where people were writing code as they were standing on stage waiting their turn to present, Roy Tennant pulling up the IRC channel while he was on stage so he could keep an eye on what we were all saying about him. It was just a lot of fun, and part of what made it fun was that everyone got involved. During that first event, there were around 80 attendees, and nearly every person made it onto the stage to talk about something that they were doing, something that they were passionate about, or something that they had been inspired to build during the course of the week. You still get this at times at the annual conference, but with its sheer size and weight, it’s become much harder to give everyone that opportunity to share the things that interest them, or easily connect with other people that might have those same interests. And I think that’s the purpose that these regional events can serve.

By and large, the C4L regional events feel much more like those early days of the C4L annual conference. They are small, usually free to attend, with a schedule that shifts and changes throughout the day. They are also the place where we come together, meet local colleagues and learn about all the fantastic work that is being done at institutions of all sizes and all types. And that’s what the C4LMW meeting was for me this year. As the host, I wanted to make sure that the event had enough structure to keep things moving, but had a place for everyone to participate. For me, that was going to be the measure of success: did we not just put on a good program, but did this event help to make connections within our local community? And I think that in this, the event was successful. I was doing a little bit of math, and over the course of the two days, I think that we had a participation rate close to 90%, and an opportunity for everyone that wanted to get up and just talk about something that they found interesting. And to be sure, there is a lot of great work being done out here by my Midwest colleagues (yes, even those up in Michigan :)).

Over the next few days, I’ll be collecting links and making the slides available via the C4LMW 2015 home page as well as wrapping up a few of the last responsibilities of hosting an event, but I wanted to take a moment and again thank everyone that attended.  These types of events have never been driven by the presentations, the hosts, or the presenters – but have always been about the people that attend and the connections that we make with the people in the room.  And it was a privilege this year to have the opportunity to host you all here in Columbus. 

Best,

–tr

by reeset at July 25, 2015 02:17 AM

July 23, 2015

TSLL TechScans

Conversations about RDA

LJ INFOdocket reports that the Library of Congress has released a new series of training videos, "Conversations about RDA". Topics include:
  • Compare and contrast: AACR2 and RDA in the bibliographic record
  • Undifferentiated personal name headings
  • Cataloger judgement and statement of responsibility
  • Capitalization, abbreviations & numbers
  • Exercising judgment in the statement of responsibility
The videos average 20 minutes and provide focused looks at topical areas. The videos are linked from the Library of Congress Webcast page within the Science and Technology category.

by noreply@blogger.com (Jackie Magagnosc) at July 23, 2015 07:34 PM

Mod Librarian

5 Things Thursday: DAM Workflow, Beautiful Libraries, Radical Posters


Here are five more things for you:

  1. “Workflow. It’s like a buzzword without any buzz.” – from Jim Kidwell’s article Digital Asset Management Workflow: The Unsung Hero of DAM.
  2. Check out this archive of posters documenting radical history from the University of Michigan Library.
  3. Another list of the world’s most beautiful libraries and The Seattle Public Library’s Central Library is on there again.
  4. I…


July 23, 2015 12:01 PM

July 22, 2015

Local Weather

I've not written here in ages. But today at work we had a great moment: we shipped out our first big collection, the Andrew Smith Gallery archives. It was the first collection we brought into our new space and now it is the first big collection out. 14 pallets of boxes coming in; 12 pallets of boxes, and several other containers going out.

The space, the stuff, and the staff: this is what makes the Beinecke Technical Services operation at 344 Winchester the wonder that it is.

There were 466 boxes on those pallets. They represent a large chunk (the rest went in bins) of the Andrew Smith Gallery records, a collection of business records from an important dealer in photography of the American West. We moved it here from the LSF Processing Space on 14 pallets soon after we moved in, and in the space of two and a half months the accessioning team processed 710 boxes. An impressive feat. It shows what we are able to achieve in our new space.

Below is a photo of the forklift coming into the same hallway as above. It is nice to be able to drive the lift between this hallway and the loading dock.
This made moving twelve pallets to the loading dock and onto trucks bound for the Library Shelving Facility much easier, instead of pushing freezer carts across the Beinecke Plaza.

I congratulate and celebrate Mike Rush, Tina Evans, Leigh Golden, Jim Fisher, Jenn Garcia, Karen Nangle, and several student workers.

Thank you to all.

by MLB (noreply@blogger.com) at July 22, 2015 03:33 PM