Planet Cataloging

August 28, 2015

First Thus

ACAT Recording place of publication

On 28/07/2015 21.25, J. McRee Elrod wrote:

> Since we are no longer trying to squeeze data onto a 3 x 5 in. card, why ever omit jurisdiction? A city well known in one place may not be universally so known. Just supply it, and don’t waste time wondering how well known the city is. That is a subjective judgement, and of course will vary from cataloguer to cataloguer, even in the same institution. That way lies inconsistency.

The opposite question can also be asked: why ever include jurisdiction?

It all goes back to Lubetzky’s “Is this rule necessary?” which I think will have to be asked by the present generation sooner or later. In our age of efficiency management, organizational restructuring and decreasing budgets (which do not seem to be improving substantially anytime soon), we need to find out for which constituency/constituencies each bit of information is necessary. If something is found to be useless for 99% of the users, it will probably be jettisoned. It is much better to decide such questions calmly and in full knowledge of the facts rather than screaming at the last moment because “We can’t continue to do everything! Something’s gotta give!” and dumping overboard all kinds of things that may be very important to certain constituencies and retaining other parts that may be useless. It happens all the time; certainly it happens in libraries.

In the present case, with MARC21 we already supply the jurisdiction. If it is seen as so important, a few lines of code could automatically display the jurisdiction already encoded in the 008, and catalogers would not have to do any more work than they have always done. Managers might then decide that it would be worthwhile to pay the costs of programming. If Bibframe doesn’t currently carry the 008 information, it should be altered so that it does.
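
To make the point concrete, here is a minimal sketch in Python – not any actual ILS or Bibframe code – of what such a display step could look like: it reads the place-of-publication code from 008/15-17 and maps it to a jurisdiction label. The mapping shown is an abridged, illustrative sample of the MARC country code list.

    # Minimal sketch: surface the jurisdiction already encoded in MARC 008/15-17.
    # The mapping is an abridged, illustrative sample of the MARC country code list.
    MARC_COUNTRY_CODES = {
        "enk": "England",
        "nyu": "New York (State)",
        "xxu": "United States",
        "it ": "Italy",
    }

    def jurisdiction_from_008(field_008):
        """Return a display label for the place-of-publication code in 008/15-17."""
        if len(field_008) < 18:
            return None
        code = field_008[15:18]
        return MARC_COUNTRY_CODES.get(code, code.strip() or None)

    # Example: a books 008 with 'enk' in positions 15-17
    print(jurisdiction_from_008("150828s2015    enk           000 0 eng d"))  # England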

James Weinheimer weinheimer.jim.l@gmail.com
First Thus http://blog.jweinheimer.net
First Thus Facebook Page https://www.facebook.com/FirstThus
Cooperative Cataloging Rules http://sites.google.com/site/opencatalogingrules/
Cataloging Matters Podcasts http://blog.jweinheimer.net/cataloging-matters-podcasts


by James Weinheimer at August 28, 2015 09:06 AM

August 27, 2015

Mod Librarian

5 Things Thursday: SEO, DAM, Findability, IPTC Video Metadata


Here are five things:

  1. How to balance website user experience with SEO.
  2. Digital asset management for everyone.
  3. Another DAM Podcast talks to Kate Jordan Gofus about DAM for video assets.
  4. The complete guide to local SEO for multiple locations.
  5. How to make library and special collections items findable through cataloging.

BONUS: Please review and comment on IPTC Video Metadata here.


August 27, 2015 12:34 PM

First Thus

ACAT Fwd: ACAT Oxford U.P. where?

On 7/28/2015 10:20 AM, Hal Cain wrote:
> There are times I think we may be supposing users will benefit, when they really don’t notice, and don’t use it for searching or selecting. How much does inconsistency matter (i.e. we capture all the data presented, and shunt the pieces into their categories, but when it’s not presented we just move on…)?

Agreed. But if we sincerely want to ask the question of “what benefits the users” (something very important to do, in my opinion), that really opens up a can of worms, and it is something that RDA/FRBR/Bibframe haven’t really addressed: what do the users really need? You can find this out only by asking people.

If we did ask, I think we would find out that the public *would like* lots of things, *wants* other things, *needs* still other things, and would love to have yet other things it doesn’t know are possible. The number of catalogers does not seem to be going up, so we cannot offer everything to everyone. What would members of the public prefer? Here is an example. We could ask people which of these they would prefer:

1) to have the country included in the publication information (e.g. to know whether an Oxford University Press item was published in the US or the UK)

2) to have more useful authorities, i.e. so that people can easily know that if they want items authored by IBM they must search in specific ways (under “International Business Machines Corporation”, while its subordinate bodies, conferences sponsored by IBM, and so on, may have other forms) and that IBM itself had earlier names

3) to know that if they want a book or movie or music, they may not have to do much at all. There are zillions of excellent free materials, so if someone wants to watch “My Man Godfrey” with William Powell and Carole Lombard (a great movie that I recently watched), you don’t have to spend any money; you don’t have to drive to a library to borrow a videotape or DVD; you don’t have to sign up with Overdrive or some paid service. You can simply search either https://www.google.it/search?q=my+man+godfrey&num=100&newwindow=1&safe=off&biw=1525&bih=672&tbm=vid&source=lnt&tbs=dur:l or https://archive.org/search.php?query=my%20man%20godfrey%20AND%20mediatype%3Amovies. These copies are completely legal. There are lots of these types of materials.

If the choice were offered to the public in this way, I think option 1 would be seen as much less important than the others.

I agree that we should focus on the needs of the users, but we need to find out what those needs are and how the public would prioritize them. Then I think we could be assured of creating something that the public wants and needs.

James Weinheimer weinheimer.jim.l@gmail.com
First Thus http://blog.jweinheimer.net
First Thus Facebook Page https://www.facebook.com/FirstThus
Personal Facebook Page https://www.facebook.com/james.weinheimer.35 Google+ https://plus.google.com/u/0/+JamesWeinheimer
Cooperative Cataloging Rules http://sites.google.com/site/opencatalogingrules/
Cataloging Matters Podcasts http://blog.jweinheimer.net/cataloging-matters-podcasts The Library Herald http://libnews.jweinheimer.net/


by James Weinheimer at August 27, 2015 12:04 PM

August 26, 2015

025.431: The Dewey blog

Personal bibliographies and biobibliographies

We received an inquiry a while ago from our colleagues at the Deutsche Nationalbibliothek (DNB), asking about the intended treatment for personal bibliographies for persons clearly associated with a specific subject—should they be classed in 016 with other bibliographies, or in 001–999 with the subject? 

In considering the question, we realized that the table of preference at 012-017 did not give a clear answer. Yes, the table of preference lists 016 Bibliographies and catalogs of works on specific subjects above 012 Bibliographies and catalogs of individuals—but the instructions at 012 on classing biobibliographies muddy the waters. Consequently, a see reference has been added at 012 indicating that personal bibliographies of persons associated with a specific subject should be classed with the subject in 016 (that is, in 016, plus notation 001–999 to indicate the subject). 

Given the complexity of the situation here—is the work being classified a personal bibliography or a biobibliography? is the person in question associated with a subject?—a Manual note was added (1) to clarify the distinction between personal bibliographies and biobibliographies and (2) to give instructions on how to class personal bibliographies and biobibliographies, depending on whether the person involved is clearly associated with a subject. The text of the Manual note reads as follows:

012 vs. 016, 001-999

Personal bibliographies and biobibliographies

A personal bibliography is a bibliography of works by or about a person. A biobibliography is a bibliography of works by or about a person, combined with substantial biographical material about the person.

Use 012 for both personal bibliographies and biobibliographies of people who are not clearly associated with a specific subject. Use 016 plus notation 001-999 for personal bibliographies of people associated with a specific subject, e.g., a personal bibliography of a psychologist 016.15. Use 001-999 plus notation T1—092 from Table 1 for biobibliographies of people associated with a specific subject, e.g., a biobibliography of a psychologist 150.92.

Add notation T1—092 from Table 1 if a personal bibliography includes annotated bibliographic entries of works by the person and the annotations constitute description and critical appraisal of the person’s work, e.g., an annotated personal bibliography of a psychologist 016.15092.

Thus, both a personal bibliography and a biobibliography are bibliographies of works by or about a person, but the biobibliography also includes “substantial biographical material.” Both personal bibliographies and biobibliographies of persons who are not clearly associated with a specific subject are classed in the same number, 012 Bibliographies and catalogs of individuals. Treatment of personal bibliographies and biobibliographies differs, however, if the person involved is clearly associated with a subject:  personal bibliographies are classed in 016 Bibliographies and catalogs of works on specific subjects, plus notation 001–999 for the subject, while biobibliographies are classed in the number for the subject, plus notation T1—092 Biography. That is, the essential characteristic of personal bibliographies is that they are bibliographies; the essential characteristic of biobibliographies is that they are biographies.
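
For readers who like to see the decision spelled out, here is a purely illustrative sketch – not an official DDC tool – of the logic in the Manual note, with number building reduced to simple digit concatenation. It assumes trailing zeros have already been dropped from the subject notation and glosses over the real add-table rules.

    def build(*segments):
        """Concatenate DDC segments as digits and insert the decimal point after
        the third digit (a simplification of real DDC number-building rules)."""
        digits = "".join(segments)
        return digits[:3] + ("." + digits[3:] if len(digits) > 3 else "")

    def classify(work_type, subject_digits=None, annotated_critical=False):
        # subject_digits: the subject's notation with trailing zeros already dropped,
        # e.g. "15" for psychology (150); None if no clear subject association
        if subject_digits is None:
            return "012"                         # person not clearly associated with a subject
        if work_type == "biobibliography":
            return build(subject_digits, "092")  # subject number plus notation T1--092
        segments = ["016", subject_digits]       # 016 plus notation 001-999
        if annotated_critical:
            segments.append("092")               # add T1--092 per the Manual note
        return build(*segments)

    # Worked examples from the Manual note (psychologist, base number 150):
    print(classify("personal bibliography", "15"))        # 016.15
    print(classify("biobibliography", "15"))              # 150.92
    print(classify("personal bibliography", "15", True))  # 016.15092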

Several examples should seal the deal.

Our first example is Paul Simon: a bio-bibliography. The work includes a brief biography of this 20th-century American singer and songwriter, followed by an extensive general bibliography, a discography, a composition list, various indexes, etc. We therefore class the work, as accurately reflected by its subtitle, as a biobibliography, that is, in the number for the subject with which Paul Simon is associated, plus notation T1—092. This gives us 782.42164092 Biography of Western popular songs (built from 782.42 Songs, plus notation 164 from 781.64 Western popular music, following the directions at 782.1–782.4, plus notation T1—092 Biography).

The second example is A Kafka bibliography, 1908-1976. Containing no substantial biographical material, the work is a personal bibliography. Kafka is known as an early 20th century German-language novelist. This work is then classed in 016.833912 Bibliographies of German fiction, 1900-1945 (built with 016 Bibliographies and catalogs of works on specific subjects, plus—following the instruction at 016 to add for the specific subject—notation 833 German fiction, plus notation 912, 1900-1945 from the period table under 831-838 Subdivisions for specific forms of German literature, following the instructions at Table 3A Subdivisions for Works by or about Individual Authors).

What if the person who is the focus of the biobibliography or personal bibliography is associated with more than one subject? Can he or she still be clearly associated with a specific subject? Yes, even though someone may be known for his or her work in several fields, that person may still be clearly associated with only one of those subjects. Such is the case with Winston Churchill, who was a military officer, a historian, and a writer (having won the Nobel Prize in literature), but who is best known as a statesman. With that background in mind, our third example is A bibliography of the works of Sir Winston Churchill. Inasmuch as the work includes no substantial biographical material, the work is classed as a personal bibliography in 016.941084 Bibliographies of history of Great Britain, 1936–1945 (built with 016 Bibliographies and catalogs of works on specific subjects, plus notation 941.084 History of Great Britain, 1936–1945, following the add instruction at 016). The time period 1936-1945 is chosen because Churchill’s prime ministership of the United Kingdom from 1940 to 1945 represents the foremost of his many contributions to British history.

If someone known for work in several fields can still be clearly associated with a single field, when should 012 Bibliographies and catalogs of individuals be used? In theory, 012 is appropriate for use with a person who is not clearly associated with any field. But it is a tad difficult to imagine a personal bibliography or biobibliography being prepared for someone who is not associated with any field at all. This class would also be appropriate for someone associated with multiple fields to more-or-less the same extent. While this possibility is perhaps more easily imagined, in practice we do not anticipate that 012 will get much use.

by Rebecca at August 26, 2015 04:28 PM

August 24, 2015

Terry's Worklog

MarcEdit Validate Headings: Part 2

Last week, I posted an update that included the early implementation of the Validate Headings tool.  After a week of testing, feedback and refinement, I think that the tool now functions in a way that will be helpful to users.  So, let me describe how the tool works and what you can expect when the tool is run.

Background:

The Validate Headings tool was added as a new report in the MarcEditor to enable users to take a set of records and get back a report detailing how many records have corresponding Library of Congress authority headings.  The tool validates data in the 1xx, 6xx, and 7xx fields, and it is currently set to query only headings and subjects that utilize the LC authorities.  At some point, I’ll look to expand to other vocabularies.

How does it work

Presently, this tool must be run from within the MarcEditor – though at some point in the future, I’ll extract it out of the MarcEditor and provide a stand-alone function and an integration with the command-line tool.  Right now, to use the function, you open the MarcEditor and select the Reports/Validate Headings menu.

[screenshot]

Selecting this option will open the following window:

[screenshot]

Options – you’ll notice three options available to you.  The tool allows users to decide which values they would like to have validated.  They can select names (1xx, 600, 610, 611, 7xx) or subjects (6xx).  Please note that when you select names, the tool does look up the 600, 610, and 611 as part of the process, because the validation of these subjects occurs within the name authority file.  The last option deals with the local cache.  As MarcEdit pulls data from the Library of Congress, it caches the data that it receives so that it can use it on subsequent headings validation checks.  The cache will be used until it expires after 30 days; however, a user can check this option at any time and MarcEdit will delete the existing cache and rebuild it during the current run.
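
As an illustration of the caching behavior described above, here is a minimal sketch in Python; the file name, layout, and function are assumptions made for the sake of the example, not MarcEdit’s actual implementation.

    import json, os, time

    CACHE_FILE = "lc_headings_cache.json"   # hypothetical cache location
    CACHE_TTL = 30 * 24 * 60 * 60           # 30 days, per the description above

    def load_cache(rebuild=False):
        """Return cached heading lookups, honoring the 30-day expiry and the
        'delete and rebuild' option."""
        if rebuild and os.path.exists(CACHE_FILE):
            os.remove(CACHE_FILE)           # user asked for a fresh cache this run
        if not os.path.exists(CACHE_FILE):
            return {}
        if time.time() - os.path.getmtime(CACHE_FILE) > CACHE_TTL:
            os.remove(CACHE_FILE)           # cache expired; rebuild during this run
            return {}
        with open(CACHE_FILE, encoding="utf-8") as fh:
            return json.load(fh)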

A couple of things you’ll also note on this screen: there is an Extract button, and it’s not enabled.  Once the Validate report is run, this button will become enabled if any records are identified as having headings that could not be validated against the service.

Running the Tool:

A couple of notes about running the tool.  When you run the tool, what you are asking MarcEdit to do is process your data file and query the Library of Congress for information related to the authorized terms in your records.  As part of this process, MarcEdit sends a lot of data back and forth to the Library of Congress utilizing the http://id.loc.gov service.  The tool attempts to use a light touch, only pulling down headings for a specific request – but do realize that a lot of data requests are generated through this function.  You can estimate approximately how many requests might be made on a specific file by using the following formula: (number of records x 2) + (number of records), assuming that most records will have 1 name to authorize and 1 subject per record.  So a file with 2500 records would generate ~7500 requests to the Library of Congress.  This is just a guess: in my tests, I’ve had some sets generate as many as 12,000 requests for 2500 records and as few as 4000 – but most test files came within about 500 requests of the 7500 estimate.
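
For those curious what a single heading check against id.loc.gov can look like, here is a rough sketch using the service’s known-label lookup.  It is an illustration only – not the exact requests MarcEdit sends – and the URL pattern is the label-resolution service as I understand it.

    import requests
    from urllib.parse import quote

    def lookup_heading(label, scheme="names"):
        """Return the id.loc.gov URI an authorized heading resolves to, or None.
        scheme is 'names' for the name authority file or 'subjects' for LCSH."""
        url = "http://id.loc.gov/authorities/{0}/label/{1}".format(scheme, quote(label))
        resp = requests.head(url, allow_redirects=True)
        return resp.url if resp.status_code == 200 else None

    # Per the report further below, this subject string does not resolve, so we expect None.
    print(lookup_heading("Performing arts--Management--Congresses", scheme="subjects"))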

So why do we care?  Well, this report has the potential to generate a lot of requests to the Library of Congress’s identifier service – and while I’ve been told that there shouldn’t be any issues with this, I think that question won’t really be answered until people start using it.  At the same time, this function won’t come as a surprise to the folks at the Library of Congress, as we’ve spoken a number of times during development.  At this point, we are all kind of waiting to see how popular this function might be, and whether MarcEdit usage will create any noticeable uptick in the service usage.

Validation Results:

When you run the validation tool, the program will go through each record, making the necessary validation requests of the LC ID service.  When the process has completed, the user will receive a report with the following information:

Validation Results:
Process completed in: 121.546001431667 minutes. 
Average Response Time from LC: 0.847667984420415
Total Records: 2500
Records with Invalid Headings: 1464
**************************************************************
1xx Headings Found: 1403
6xx Headings Found: 4106
7xx Headings Found: 1434
**************************************************************
1xx Headings Not Found: 521
6xx Headings Not Found: 1538
7xx Headings Not Found: 624
**************************************************************
1xx Variants Found: 6
6xx Variants Found: 1
7xx Variants Found: 3
**************************************************************
Total Unique Headings Queried: 8604
Found in Local Cache: 1001
***************************************************************

This represents the header of the report.  I wanted users to be able to quickly, at a glance, see what the Validator determined during the course of the process.  From here, I can see a couple of things:

  1. The tool queried a total of 2500 records
  2. Of those 2500 records, 1464 had at least one heading that was not found
  3. Within those 2500 records, 8604 unique headers were queried
  4. Within those 2500 records, there were 1001 duplicate headings across records (these were not duplicate headings within the same record, but for example, multiple records with the same author, subject, etc.)
  5. We can see how many Headings were found by the LC ID service within the 1xx, 6xx, and 7xx blocks
  6. Likewise, we can see how many headings were not found by the LC ID service within the 1xx, 6xx, and 7xx blocks.
  7. We can see the number of variants as well.  Variants are defined as headings that resolved, but where the preferred form returned by the Library of Congress didn’t match what was in the record.  Variants will be extracted as part of the records that need further evaluation.

After this summary of information, the Validation report returns information related to the record # (record number count starts at zero) and the headings that were not found.  For example:

Record #0
Heading not found for: Performing arts--Management--Congresses
Heading not found for: Crawford, Robert W

Record #5
Heading not found for: Social service--Teamwork--Great Britain

Record #7
Heading not found for: Morris, A. J

Record #9
Heading not found for: Sambul, Nathan J

Record #13
Heading not found for: Opera--Social aspects--United States
Heading not found for: Opera--Production and direction--United States

The current report format includes specific information about the heading that was not found.  If the value is a variant, it shows up in the report as:

Record #612
Term in Record: bible.--criticism, interpretation, etc., jewish
LC Preferred Term: Bible. Old Testament--Criticism, interpretation, etc., Jewish
URL: http://id.loc.gov/authorities/subjects/sh85013771
Heading not found for: Bible.--Criticism, interpretation, etc

Here you see that the report returns the record number, the normalized form of the term as queried, the current LC preferred term, and the URL for the term that was found.

The report can be copied and placed into a different program for viewing or can be printed (see buttons).

[screenshot]

To extract the records that need work, minimize or close this window and go back to the Validate Headings Window.  You will now see two new options:

[screenshot]

First, you’ll see that the Extract button has been enabled.  Click this button, and all the records that have been identified as having headings in need of work will be exported to the MarcEditor.  You can now save this file and work on the records. 

Second, you’ll see the new link – save delimited.  Click on this link, and the program will save a tab-delimited copy of the validation report.  The report will have the following format:

Record ID [tab] 1xx [tab] 6xx [tab] 7xx [new line]

Within a column, multiple headings are delimited by a colon; so if two 1xx headings appear in a record, the current process creates a single 1xx column with the two headings separated by a colon, like: heading 1:heading 2.
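
Here is a small sketch of how such a report could be read back programmatically; the file name is hypothetical, and this is not part of MarcEdit itself.

    # Read the tab-delimited validation report described above: tab-separated
    # columns (record ID, 1xx, 6xx, 7xx), with multiple headings inside one
    # column joined by colons.
    import csv

    with open("validation_report.txt", newline="", encoding="utf-8") as fh:
        for record_id, f1xx, f6xx, f7xx in csv.reader(fh, delimiter="\t"):
            headings = {
                "1xx": f1xx.split(":") if f1xx else [],
                "6xx": f6xx.split(":") if f6xx else [],
                "7xx": f7xx.split(":") if f7xx else [],
            }
            print(record_id, headings)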

Future Work:

This function required making a number of improvements to the linked data components – and because of that, the linking tool should work better and faster now.  Additionally, because of the variant work I’ve done, I’ll soon be adding code that will give the user the option to update headings for variants as this report or the linking tool is running – and I think that is pretty cool.  If you have other ideas or find that this is missing a key piece of functionality – let me know.

–tr

by reeset at August 24, 2015 02:16 AM

August 23, 2015

Resource Description & Access (RDA)

RDA LC-PCC PS Revision

Resource Description and Access RDA

RDA Toolkit Update, August 11, 2015 - Changes in Resource Description and Access (RDA) and Library of Congress - Program for Cooperative Cataloging Policy Statements (LC-PCC PS) and RDA Toolkit

TOPIC 1: Changes in RDA Content
TOPIC 2: Change in Content in LC-PCC PSs
TOPIC 3: Functional Changes in the RDA Toolkit

TOPIC 1: Changes in RDA Content : Fast Track changes

The PDF file mentioned in the URL below, from the RDA JSC site, identifies the "Fast Track" changes to RDA that will be included in this release (6JSC-Sec-16.pdf); Fast Track changes are not added to the RDA Update History.  While you are encouraged to peruse them, there are no significant changes.

TOPIC 2: Change in Content in LC-PCC PSs

A summary of LC-PCC PS updates incorporated in this release is available in the August 11, 2015 release of the RDA Toolkit.  Catalogers should review the following policy statements:

1.8.2, First Alternative:  revised to include an exception for Chinese, Japanese, Korean, Perso-Arabic, Cyrillic, and Greek catalogers to substitute Western-style arabic numerals when numbers are found in non-Latin scripts.  For Hebrew script catalogers, also note changes to the options at 2.6.3.3 and similar instructions for adding Gregorian date when a date in the Hebrew script is being recorded.

1.8.2, Second Alternative: the LC practice for supplying equivalent numbers has been revised; LC catalogers may now supply such equivalents (e.g., a date in arabic numerals when roman numerals are on the resource) if considered important.

2.12.1.2, 2.12.9.2: At the request of the PCC Series Policy Task Group, the instructions for sources of series statements and series numbering have been revised with respect to information transcribed from “sources within the resource”.

6.27.1.9, 6.28.1.2, 6.28.1.10:  Information on authorized access points for librettos has been revised in consultation with the Music Library Association.

11.3.2.3, 11.13.1.8.1:  A new policy statement has been developed for those cases when it is not feasible to record *all* locations of a conference, etc. (more common for certain types of sporting events).  The statement allows for recording an applicable larger place (or places), or a single place primarily associated with the conference, etc., (e.g., a host city).

TOPIC 3: Functional Changes in the RDA Toolkit

There are no functional changes in the RDA Toolkit in this release.
The next planned release of the RDA Toolkit will be in October 2015.

Source: Library of Congress

Thanks all for your love, suggestions, testimonials, likes, +1, tweets and shares ....


by Salman Haider (noreply@blogger.com) at August 23, 2015 05:48 AM

August 21, 2015

Resource Description & Access (RDA)

LCSH - Subject Headings Manual (SHM) H 202 and H 203 Revised

Note: The Subject Headings Manual (SHM) provides guidelines for using Library of Congress Subject Headings (LCSH). The manual was originally conceived as an in-house procedure manual addressed to cataloging staff at the Library of Congress. From the very beginning, however, it included not only procedures and practices to be followed by LC catalogers but also substantive explanations of subject cataloging policy. Other libraries that wish to catalog in the same manner as the Library of Congress, as well as faculty at schools of library science who wish to teach Library of Congress subject cataloging policies to their students, should follow the guidelines of the Subject Headings Manual (SHM).
The Librarianship Studies & Information Technology blog will focus more on the techniques of Library of Congress Classification (LCC) and Library of Congress Subject Headings (LCSH), through use of the Classification and Shelflisting Manual (CSM), the Subject Headings Manual (SHM), the Library of Congress Classification Web tool, and the Dewey Decimal Classification (DDC). Follow Librarianship Studies & Information Technology in social media to be updated on new items, and start or comment on discussions in the Google+ community Librarianship Studies & Information Technology and the Facebook group Librarianship Studies & Information Technology.

by Salman Haider (noreply@blogger.com) at August 21, 2015 08:13 AM

August 20, 2015

Mod Librarian

5 Things Thursday: DAM, Metadata, Controlled Vocabulary


Here are 5 things of interest:

  1. Amaze and delight with insights from this Digital Asset Management Fact Sheet.
  2. Avoiding data modelling pitfalls with your graph database project.
  3. Embracing metadata change by Joshua Lynn.
  4. The New York Times article tagging robot.
  5. Implementing a controlled vocabulary in Adobe Lightroom.


August 20, 2015 12:15 PM

August 17, 2015

First Thus

ACAT Adding an unauthorized author heading

On 7/17/2015 6:20 PM, Joanna Sturgeon wrote:
> The adult services Librarian wants me to change the authoritative “Lindsay, Jeffry P.” to “Lindsay, Jeff.” Is it acceptable to place the unauthorized version in a 700 entry? Is there some better way to deal with this?

I am personally not into mystery novels, but my wife is. It seems as if this is a rather complicated case. According to the NAF, “Lindsay, Jeffry P.” is the heading for the author of the Dexter novels (which were made into a TV series). From his Wikipedia page (https://en.wikipedia.org/wiki/Jeff_Lindsay_(writer)), we discover that “Jeff Lindsay” is the pseudonym of Jeffry P. Freundlich. We also discover something very important: “Many of his earlier published works include his wife Hilary Hemingway as a co-author.”

After this, I looked up usage for the Dexter novels in OCLC, http://bit.ly/1Kd8jBG (this search limits to English-language books), and found two forms, “Jeff Lindsay” and “Jeffrey Lindsay”, with the second form used only once. “Jeffry P. Lindsay” appears only on his novel “Tropical depression” (not part of the Dexter series) and is used consistently in other books he wrote with Hilary Hemingway.

Here is the authority record.

100 1_ |a Lindsay, Jeffry P.
370 __ |e Cape Coral, Fla.
374 __ |a Screenwriters |a Authors |a Novelists |2 lcsh
400 1_ |a Lindsay, Jeff |q (Jeffry P.)
400 1_ |a לינדסי, ג׳ף
667 __ |a Machine-derived non-Latin script reference project.
667 __ |a Non-Latin script reference not evaluated.
670 __ |a Tropical depression, c1994: |b t.p. (Jeffry P. Lindsay) jkt. (Screenwriter, director, and author of over 14 plays; makes his home in Cape Coral, Fla.)
670 __ |a His Dearly devoted Dexter, c2005: |b CIP t.p. (Linday, Jeff)
670 __ |a Dexter’s final cut, c2013: |b (Jeff Lindsay is the NY Times bestselling author and creator of the Dexter novels. His novels are the inspiration for the Showtime and CBS series Dexter.)
675 __ |a Halliwell’s filmgoer’s and video viewer’s companion, c1993

Based on all of this, the record needs to be updated. “Jeff Lindsay” is the clear usage for the author of the Dexter series. “Jeffry P. Lindsay” is the author of some plays and of books on other topics. Plus, his real name, “Jeffry P. Freundlich”, needs to be added – but I don’t know if he has written anything under it. Apparently not, based on an OCLC search.

Since we are dealing with pseudonyms, there could be a real case for separate bibliographic identities, where “Jeff Lindsay” and “Jeffry P. Lindsay” require different headings. I would argue, from an examination of the catalog records, that Mr. Freundlich consistently uses the different names as different identities.

These are some of the problems that pop up when people ask for changes in the catalog! In this case, however, I think the changes would be justified.

James Weinheimer weinheimer.jim.l@gmail.com
First Thus http://blog.jweinheimer.net
First Thus Facebook Page https://www.facebook.com/FirstThus
Personal Facebook Page https://www.facebook.com/james.weinheimer.35 Google+ https://plus.google.com/u/0/+JamesWeinheimer
Cooperative Cataloging Rules http://sites.google.com/site/opencatalogingrules/
Cataloging Matters Podcasts http://blog.jweinheimer.net/cataloging-matters-podcasts The Library Herald http://libnews.jweinheimer.net/


by James Weinheimer at August 17, 2015 02:00 PM

August 16, 2015

OCLC Cataloging and Metadata News

IFLA 2015

Join your colleagues at OCLC information sessions during IFLA 2015

August 16, 2015 12:00 PM

Resource Description & Access (RDA)

RDA Bibliography

Articles
Books
Presentations
Videos

A New Video Series from The Library of Congress: “Conversations About RDA”: The Library of Congress has just released online “Conversations About RDA”, a new series of five training videos providing “tips and strategies for working with the RDA.” The videos were recorded on May 20, 2015.


The format of the bibliographic descriptions here is similar to that on the Bibliography page of RDA Bibliography: Title. Author/Editor/Compiler. Year. Publisher/Journal. Pages/Volume/Issue/Slides/Minutes. Place.

This is a compilation from Google Alerts and other sources and searches. Check the complete compilation so far on the Bibliography page of RDA Bibliography, which contains Articles, Books, Presentations, Theses, and Videos on Resource Description and Access (RDA) in a spreadsheet view.


[RDA Bibliography is a partner-blog of RDA Blog]

Please suggest new resources to be included in the RDA Bibliography through the form available on the About RDA Bibliography page.

Please provide your valuable feedback about RDA Bibliography in the RDA Blog Guest Book. Select "RDA Bibliography" from the drop-down option in the "Choose a Blog" part of the form.



Thanks all for your love, suggestions, testimonials, likes, +1, tweets and shares ....


by Salman Haider (noreply@blogger.com) at August 16, 2015 05:46 AM

August 15, 2015

First Thus

ACAT Bibframe

On 7/15/2015 1:50 PM, McDonald, Stephen wrote:
> Bibframe has a relationship to RDA similar to the relationship between MARC and AACR2. Bibframe and MARC are methods for storing metadata. RDA and AACR2 are standards for deciding what metadata to store. RDA replaced AACR2; Bibframe is intended to replace MARC.
>
> Bibframe is compatible with RDA, and is intended to serve as a storage framework for RDA with linked data capabilities. The plan is that Bibframe will be able to convert to and from MARC, XML, and other metadata standards, allowing existing metadata (from libraries, publishers, and web services) to be used as a basis for future cataloging, and allowing library data to be used in non-library environments.

Bibframe and MARC are designed not for storage but for communication. Just as most library catalogs that use MARC records actually store them in relational database structures, Bibframe will allow catalog information to be communicated in RDF triples. These must be stored in a database that can hold RDF triples and respond to the relevant queries. There are many, many options for achieving that: some may use an RDBMS, others may be completely different kinds of databases, and there will undoubtedly be many more in the future. In addition, each of these databases can and will be structured in completely different ways, depending on local needs. (Perhaps the best and “simplest” explanation I have seen is at http://www.dataversity.net/introduction-to-triplestores/)
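
For readers who have not worked with triples, here is a toy sketch using Python’s rdflib of what “catalog information communicated as RDF triples” can look like; the vocabulary namespace and property names are illustrative assumptions, not actual Bibframe output.

    # Toy sketch: a couple of bibliographic statements expressed as RDF triples
    # and serialized as Turtle for communication. Property names are illustrative.
    from rdflib import Graph, Literal, Namespace, URIRef

    BF = Namespace("http://bibframe.org/vocab/")   # illustrative vocabulary base
    work = URIRef("http://example.org/work/1")     # hypothetical local URI

    g = Graph()
    g.add((work, BF.title, Literal("Example title")))
    g.add((work, BF.creator, URIRef("http://example.org/agent/1")))

    print(g.serialize(format="turtle"))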

So today, when you download a record using Z39.50 into a local library catalog, if you have a relational database (which is almost everybody), the system actually reworks everything in the Z39.50 record and places each bit of information into the correct table and cell. I know the most about the Koha catalog structure because it is open: you can see its table structure at http://schema.koha-community.org/. Going through the list, you can see that there are 165 tables in total; that the title (245a) goes into the table “biblio/title” while the publisher statement goes into another table, “biblioitems/publishercode”; and that the biblioitems table also includes a complete ISO2709 version of the record (“biblioitems/marc”) as well as a complete MARCXML version (“biblioitems/marcxml”). Of course, the library catalog you downloaded the original record from almost certainly has internal structures that are completely different from Koha’s.
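
To illustrate how one “record” ends up spread across those relational tables, here is a rough sketch – using SQLite as a stand-in engine and made-up sample data, not Koha’s own code or full schema – of a join across biblio and biblioitems:

    import sqlite3  # stand-in engine for illustration; Koha itself runs on MySQL/MariaDB

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE biblio (biblionumber INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE biblioitems (biblionumber INTEGER, publishercode TEXT, marcxml TEXT);
    INSERT INTO biblio VALUES (1, 'Example title');
    INSERT INTO biblioitems VALUES (1, 'Example Press', '<record>...</record>');
    """)

    # The point of the sketch: one downloaded record is split across tables,
    # and a full MARCXML copy is retained alongside the parsed-out columns.
    row = conn.execute("""
        SELECT b.title, bi.publishercode, bi.marcxml
        FROM biblio b JOIN biblioitems bi ON bi.biblionumber = b.biblionumber
        WHERE b.biblionumber = 1
    """).fetchone()
    print(row)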

So, storing information can be done in a whole variety of ways, each determined by local needs. What is important is that the information can be queried in consistent and reliable ways, and then shared (communicated) in equally consistent and reliable ways. ISO2709 allowed all of that, but RDF uses different methods, and Bibframe is attempting to come up with ways to do the querying and communicating using those RDF methods. There is nothing wrong with that, but it also doesn’t mean that the internal workings of our library catalogs will have to take on the structures of Bibframe and be completely retooled – quite the opposite. A few extra tweaks may be needed here and there, but nothing major.

When looked at only in this way, RDF offers nothing substantially new over earlier methods except a different structure. What is really new and exciting with RDF is the inclusion of the URI link and how developers can use the information found at the end of those links.

How it will all work in a library – and, more importantly, whether the public will like it or not – is still unclear. There is certainly a lot of promise, but our “web technology junkyards” are overflowing with projects that were once “promising”. So far, the Semantic Web has promised a lot but delivered very little that has been impressive. The much-touted Google Knowledge Graph is pretty much useless. This doesn’t mean that the Semantic Web cannot work, but its successes so far have been limited.

Although I don’t agree with everything in this controversial article, it does at least provide a different viewpoint from what we normally get: https://gigaom.com/2013/11/03/three-reasons-why-the-semantic-web-has-failed/

James Weinheimer weinheimer.jim.l@gmail.com
First Thus http://blog.jweinheimer.net
First Thus Facebook Page https://www.facebook.com/FirstThus
Personal Facebook Page https://www.facebook.com/james.weinheimer.35 Google+ https://plus.google.com/u/0/+JamesWeinheimer
Cooperative Cataloging Rules http://sites.google.com/site/opencatalogingrules/
Cataloging Matters Podcasts http://blog.jweinheimer.net/cataloging-matters-podcasts The Library Herald http://libnews.jweinheimer.net/


by James Weinheimer at August 15, 2015 10:12 AM