Planet Cataloging

September 28, 2016

OCLC Next

A five-step cycle for strong library programs

Strong libraries are the heart of thriving communities. At OCLC, we don’t just believe that; it’s the foundation of our WebJunction program. It also informs the model we see forward-moving libraries using to drive an ongoing cycle of success.

The cycle of your work is our work

What is a “strong library”? It can actually be defined by this cycle:

  1. It starts with understanding and identifying the needs of your community.
  2. Next, you have to build the capacity to serve those needs, which, yes, involves making choices and defining priorities.
  3. You then design and deliver the services that meet the needs.
  4. Those services aren’t successful if they don’t reach your target audience, so outreach and communication are essential.
  5. Finally, you evaluate the effectiveness of your services and their impact on local priorities, such as lifelong learning, health and economic success.

Communities shift over time and external factors—whether crisis or opportunity—can rise up unannounced, so this is a cyclical process, not a one-off exercise.


Library success is a cyclical process, not a one-off exercise.


The same model applies to us

You can see this same cycle in the work WebJunction has done since it was launched in 2002 with a grant to OCLC from the Bill & Melinda Gates Foundation. We launched in order to “provide training, content and a supportive community for the staff of libraries that manage computer and internet technology for public use.” In more recent years, we’ve applied the same cycle to help design programs that connect what people do inside the library to their lives outside the library. In essential areas such as lifelong learning, health and employment, we were able to “evaluate service outcomes” from library programs around the world on behalf of others. In this way, successful, strong libraries become models, and their results become replicable.

This cycle applies to all our work across the board at WebJunction. We seek to understand library needs, build capacity to meet those needs, design and deliver services to meet those needs, reach the libraries that have the needs and evaluate the outcomes. To do this:

  • We stay in touch with our library audience on Facebook, Twitter and local media, through polls and surveys, chats and calls.
  • We scan for knowledge across both the library field and outside resources.
  • We take learning seriously and report back with information like how to make the most out of webinars, the value of self-paced courses and how blended learning combines the best of online and offline content.
  • We connect you to other library staff through live and online learning programs.
  • We partner with funders and library organizations to harness the collective impact of public libraries, including projects on the economic crash, consumer health information, “supercharged” storytimes for young children and re-envisioning library spaces to support active learning.

And, just like public libraries do for their communities, we offer these resources for free. Because you keep telling us that you love that WebJunction is free to use and is accessible from anywhere, anytime, by anyone.

What’s new…what’s next

Over these 14 years, WebJunction has expanded our focus to cover more of what works and what our members have told us is important to them. Our staff—passionate about the transformative power of libraries and deeply immersed in the opportunities of scalable learning—is able to use established methods within a flexible platform in order to respond to important changes in the landscape.

As you may know, the Gates Foundation is winding down its Global Libraries program, which means we will be looking to new and renewed partnerships to fuel our mission into the future. We will remain hardcore about designing and delivering learning content and programs that provide the inspiration, skills and confidence to keep libraries looking and moving forward.

Step 1: We want to hear from you—again and again—about how we can best support your library’s learning needs and your community’s priorities. You can reach me directly at streamss@oclc.org with questions for us, examples of great work your library is doing or ideas on how we can improve WebJunction programming across the board.

Question…What cycles of learning has your library gone through recently? Let us know on Twitter with the hashtag #OCLCnext.

The post A five-step cycle for strong library programs appeared first on OCLC Next.

by Sharon Streams at September 28, 2016 04:00 PM

September 27, 2016

Problem Cataloger

Terry's Worklog

Note on Automatic Updates

Please note: MarcEdit’s automated update tool will notify you of the update, but you will need to download the file manually from http://marcedit.reeset.net/downloads.  My web host, Bluehost, has made a change to their server configuration that makes no sense to me, but ultimately dumps traffic sent from non-web browsers (connections without a user-agent header).  Right now, users will get this message when they attempt to download using the automatic update:

[Screenshot: the error message Bluehost returns when the automatic update tries to download]

I can accommodate the requirements they have set up now, but it will mean that users will need to do manual downloads for the current update posted 9/27/2016 and the subsequent update — which I’ll try to get out tonight or tomorrow.

I apologize for the inconvenience, but after spending 8 hours yesterday and today wrangling with them and trying to explain what this breaks (because I have some personal tools that this change affects), I’m just not getting anywhere.  Maybe something will magically change, maybe not — but for now I’ll be rewriting the update process to try to protect against these kinds of unannounced changes in the future.

So again, you’ll want to download MarcEdit from http://marcedit.reeset.net/downloads since the automatic update download connection is currently being dumped by my web host.
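For anyone curious about the shape of the eventual fix: the host drops connections that arrive without a user-agent, so the rewritten downloader simply needs to identify itself. MarcEdit is a .NET application, so the following is only a minimal Python sketch of that general idea, not MarcEdit’s actual update code; the downloads URL is the real one mentioned above, and the output filename is arbitrary.

    # Minimal sketch (not MarcEdit's code): send a User-Agent header so the
    # host does not drop the request, then save whatever it returns.
    import urllib.request

    url = "http://marcedit.reeset.net/downloads"  # real downloads page noted above
    req = urllib.request.Request(url, headers={"User-Agent": "MarcEdit-Updater/1.0"})
    with urllib.request.urlopen(req, timeout=30) as resp, open("downloads.html", "wb") as out:
        out.write(resp.read())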

by reeset at September 27, 2016 05:45 PM

MarcEdit Update (Windows/Linux)

I’ve posted a new set of updates.  The initial set is for Windows and Linux.  I’ll be posting Mac updates later this week.  Here’s the list of changes:

  • Behavior Change — Windows/Linux: Intellisense turned off by default (this is the box that shows up when you start to type a diacritic) for new installs. As more folks use UTF8, this option makes less sense. Will likely make plans to remove it within the next year.
  • Enhancement: Select Extracted Records: UI Updates to the import process.
  • Enhancement: Select Extracted Records: Updates to the batch file query.
  • Behavior Change: Z39.50 Client: Added an override that lets the Z39.50 client exceed the default search limits. Beware: using this override is potentially problematic.
  • Update: Linked Data Rules File: Rules file updated to add databases for the Japanese Diet library, 880 field processing, and the German National Library.
  • Enhancement: Task Manager: Added a new macro/delimiter. {current_file} will print the current filename if set.
  • Bug Fix: RDA Helper – Abbreviation expansion failed to process specific fields when the config file was changed.
  • Bug Fix: MSXML Engine – In an effort to allow the xsl:strip-whitespace element, I broke this process. The workaround has been to use the Saxon.NET engine; however, I’ll correct this. Information on how to emulate the xsl:strip-whitespace element will be here: http://marcedit.reeset.net/xslt-processing-xslstrip-whitespace-issues (see also the sketch after this list).
  • Bug Fix: Task Manager Editing – When adding the RDA Helper to a new task, it asks for file paths. This was caused by some enhanced validation around files and didn’t impact any existing tasks.
  • Bug Fix: UI changes – I’m setting default sizes for a number of forms to improve usability.
  • Bug Fix/Enhancement: OpenRefine Import – OpenRefine’s release candidate changes the tab-delimited output slightly. I’ve added some code to accommodate the changes.
  • Enhancement: MarcEdit Linked Data Platform – adding enhancements to make it easier to add collections and update the rules file
  • Enhancement: MarcEdit Linked Data Platform – updating the rules file to include a number of new endpoints
  • Enhancement: MarcEdit Linked Data Platform – adding new functionality to the rules file to support the recoding of the rules file for UNIMARC.
  • Enhancement: Edit Shortcut – Adding a new edit shortcut to find fields missing words.
  • Enhancement: XML Platform – making it clearer that you can use either XQuery or XSLT for transformations into MARCXML
  • Enhancement: OAI Harvester – code underneath to update user agent and accommodate content-type requirements on some servers.
  • Enhancement: OCLC API Integration – added code to integrate with the validation. Not sure this makes its way into the interface yet, but code will be there.
  • Enhancement: Saxon.NET version bump
  • Enhancement: SPARQL Explorer – Updating the SPARQL engine to give me more access to low-level data manipulation.
  • Enhancement: Autosave option when working in the MarcEditor. Saves every 5 minutes. Will protect against crashes.
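
On the xsl:strip-whitespace item above: the standard XSLT element involved is xsl:strip-space, and MarcEdit’s fix lives in its MSXML engine, so the following is only a minimal Python/lxml sketch of the general idea (dropping whitespace-only text nodes before running a transform), not MarcEdit’s code; the filenames are placeholders.

    # Illustrative only: emulate xsl:strip-space by discarding whitespace-only
    # text nodes before applying an XSLT transform. Filenames are placeholders.
    from lxml import etree

    parser = etree.XMLParser(remove_blank_text=True)   # drop ignorable whitespace
    source = etree.parse("records.xml", parser)
    transform = etree.XSLT(etree.parse("transform.xsl"))
    print(str(transform(source)))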

Downloads are available from the downloads page (http://marcedit.reeset.net/downloads).

–tr

by reeset at September 27, 2016 04:10 AM

September 26, 2016

TSLL TechScans

Getting to Know TS Law Librarians: Travis Spence


1. Introduce yourself (name & position).
Travis Spence. Head of Technical Services at the Cracchiolo Library of the University of Arizona James E. Rogers College of Law.

2. Does your job title actually describe what you do? Why/why not?
I think it does; however, the definition of “Technical Services” changes as the profession evolves. I still work in some of the traditional roles and methods of acquisitions and cataloging, but I also need to stay aware of how technology is changing what I do. Adaptability is key, as Technical Services takes on more data management and online access responsibilities. Technical Services can mean a lot of different things now.

3. What are you reading right now?
For nonfiction, I’ve been reading Letting Go of Legacy Services: Library Case Studies, edited by Mary Evangeliste and Katherine Furlong. It has a lot of thought-provoking stories about libraries that have given up practices that were once indispensable but can now drain resources and prevent libraries from adapting to users' new needs.

For fiction, I’ve just started Farthing, by Jo Walton, a murder mystery set in an alternate history where the UK backed out of World War II after making peace with Germany.

4. If you could work in any library (either a type of library or a specific one), what would it be? Why?
I think I'd enjoy managing a completely online library someday. I'm intrigued by the challenges of providing users with seamless access to curated resources and research services, no matter where they are or when they need information.

by noreply@blogger.com (Lauren Seney) at September 26, 2016 04:13 PM

Coyle's InFormation

2 Mysteries Solved!

One of the disadvantages of a long tradition is that the reasons behind certain practices can be lost over time. This is definitely the case with many practices in libraries, and in particular in practices affecting the library catalog. In U.S. libraries we tend to date our cataloging practices back to Panizzi, in the 1830s, but I suspect that he was already building on practices that preceded him.

A particular problem with this loss of history is that without the information about why a certain practice was chosen it becomes difficult to know if or when you can change the practice. This is compounded in libraries by the existence of entries in our catalogs that were created long before us and by colleagues whom we can no longer consult.


I was recently reading through volume one of the American Library Journal from the year 1876-1877. The American Library Association had been founded in 1876 and had its first meeting in Philadelphia in September, 1876. U.S. librarianship finally had a focal point for professional development. From the initial conference there were a number of ALA committees working on problems of interest to the library community. A Committee on Cooperative Cataloguing, led by Melvil Dewey (who had not yet been able to remove the "u" from "cataloguing"), was proposing that cataloging of books be done once, centrally, and shared, at a modest cost, with other libraries that purchased the same book. This was realized in 1902 when the Library of Congress began selling printed card sets. We still have cooperative cataloging, 140 years later, and it has had a profound effect on the ability of American libraries to reduce the cost of catalog creation.

Other practices were set in motion in 1876-1877, and two of these can be found in that inaugural volume. They are also practices whose rationales have not been obvious to me, so I was very glad to solve these mysteries.

Title case

Some time ago I asked on Autocat, out of curiosity, why libraries use sentence case for titles. No one who replied had more than a speculative answer. In 1877, however, Charles Ammi Cutter reports on The Use of Capitals in library cataloging and defines a set of rules that can be followed. His main impetus is "readability;" that "a profusion of capitals confuses rather than assists the eye...." (He also mentions that this is not a problem with the Bodleian library catalog, as that is written in Latin.)

Cutter would have preferred that capitals be confined to proper names, eschewing their use for titles of honor (Rev., Mrs., Earl) and initialisms (A.D). However, he said that these uses were so common that he didn't expect to see them changed, and so he conceded them.

All in all, I think you will find his rules quite compelling. I haven't looked at how they compare to any such rules in RDA. So much still to do!

Centimeters

I have often pointed out, although it would be obvious to anyone who has the time to question the practice, that books are measured in centimeters in Anglo-American catalogs, although there are few cultures as insistent on measuring in inches and feet as those. It is particularly unhelpful that books in libraries are cataloged with a height measurement in centimeters while the shelves that they are destined for are measured in inches. It is true that the measurement forms part of the description of the book, but at least one use of that is to determine on which shelves those books can be placed. (Note that in some storage facilities, book shelves are more variable in height than in general library collections and the size determination allows for more compact storage.) If I were to shout out to you "37 centimeters" you would probably be hard-pressed to reply quickly with the same measurement in inches. So why do we use centimeters?

The newly formed American Library Association had a Committee on Sizes. This committee had been given the task of developing a set of standard size designations for books. The "size question" had to do with the then-current practice of listing sizes as folio, quarto, etc. Apparently the rise of modern paper making and printing meant that those were no longer the actual sizes of books. In the article by Charles Evans (pp. 56-61) he argued that actual measurements of the books, in inches, should replace the previous list of standard sizes. However, later, the use of inches was questioned. At the ALA meeting, W.F. Poole (of Poole's indexes) made the following statement (p. 109):
"The expression of measure in inches, and vulgar fractions of an inch, has many disadvantages, while the metric decimal system is simple, and doubtless will soon come into general use."
The committee agreed with this approach, and concluded:
"The committee have also reconsidered the expediency of adopting the centimeter as a unit, in accordance with the vote at Philadelphia, querying whether it were really best to substitute this for the familiar inch. They find on investigation that even the opponents of the metric system acknowledge that it is soon to come into general use in this country; that it is already adopted by nearly every other country of importance except England; that it is in itself a unit better adapted to our wants than the inch, which is too large for the measurement of books." (p. 180)

The members of the committee were James L. Whitney, Charles A. Cutter, and Melvil Dewey, the latter having formed the American Metric Bureau in July of 1876, both a kind of lobbying organization and a sales point for metric measures. My guess is that the "investigation" was a chat amongst themselves, and that Dewey was unmovable when it came to using metric measures, although he appears not to have been alone in that. I do love the fact that the inch is "too large," and that its fractions (1/16, etc.) are "vulgar."

Dewey and cohort obviously weren't around when compact discs came on the scene, because those are measured in inches ("1 sound disc : digital ; 4 3/4 in"). However, maps get the metric treatment: "1 map : col. ; 67 x 53 cm folded to 23 x 10 cm". Somewhere there is a record of these decisions, and I hope to come across them.

It would have been ideal if the U.S. had gone metric when Dewey encouraged that move. I suspect that our residual umbilical cord linking us to England is what scuppered that. Yet it is a wonder that we still use those too-large, vulgar measurements. Dewey would be very disappointed to learn this.



So there it is, two of the great mysteries solved in the record of the very first year of the American library profession. Here are the readings; I created separate PDFs for the two most relevant sections:

American Library Journal, volume 1, 1876-1877 (from the Internet Archive)
Cutter, Charles A. The use of capitals. American Library Journal, v.1, n. 4-5, 1877. pp. 162-166
The Committee on Sizes of Books. American Library Journal, v.1, n. 4-5, 1877. pp. 178-181

Also note that beginning on page 92 there is a near verbatim account of every meeting at the first American Library Association conference in Philadelphia, September, 1876. So verbatim that it includes the mention of who went out for a smoke and missed a key vote. And the advertisements! Give it a look.

by Karen Coyle (noreply@blogger.com) at September 26, 2016 10:57 AM

September 23, 2016

Resource Description & Access (RDA)

RDA Core Elements

Contents:
  • Definition of RDA Core Elements
  • Types of RDA Core Elements
  • Examples of RDA Core Elements

Core Elements

Core elements in Resource Description & Access (RDA) are the minimum elements required for describing resources. Core elements are a new feature of RDA, which allows certain metadata elements to be identified as “required” in the cataloging process. The assignment of core status is based on attributes mandatory for a national-level record, as documented in the FRBR and FRAD models. At a minimum, a bibliographic description should include all the required core elements that are applicable. Core-ness is identified at the element level. Some elements are always core (if applicable and the information is available); some are core only in certain situations. Core elements are identified in two ways within RDA. First, all core elements are discussed in general, and listed as a group, in the sub-instructions of "RDA 0.6: Core Elements". Second, within the separate chapters the core elements are identified individually by the label “CORE ELEMENT” at the beginning of the instructions for each element; they are clearly labeled in light blue at each core instruction in the RDA Toolkit. If the status of an element as core depends upon the situation, an explanation appears after the “Core element” label.

See, for example, this label for the core element for the title.
        2.3. Title
                CORE ELEMENT
             The title proper is a core element. Other titles are optional.

The Joint Steering Committee (JSC) for the development of RDA decided it would be preferable to designate certain elements as “core” rather than designating all elements as either “required” or “optional.” Decisions on core elements were made in the context of the FRBR and FRAD user tasks.
AACR2 provided three levels of bibliographic description. The first level, also known as minimal-level cataloging, contains at least the elements that basically identify the resource without providing a detailed description. The second level, also known as standard-level cataloging, provides all applicable elements needed to uniquely identify all copies of a manifestation. The third level represents full description and contains all elements provided in the rules that are applicable to the item being described. RDA does not define levels of description; instead, it identifies a number of elements as core elements. Core elements in RDA are similar to AACR2's minimal-level bibliographic description.

RDA core elements comprise elements that fulfill the user tasks of find, identify, and select. Only one instance of a core element is required; subsequent instances are optional. For example, for the core element “Place of Publication” the RDA instruction states: “If more than one place of publication appears on the source of information, only the first recorded is required.” So if the source reads “London ; New York,” recording “London” alone satisfies the core requirement. If all the applicable core elements are recorded and a resource is still indistinguishable from another resource, then additional metadata is necessary. Additional metadata elements, beyond the core, are included based on the necessity for differentiation, policy statements, cataloger’s judgment, and/or local institutional policies. Catalogers should make a proper judgment about what additional elements or multiple values of a single element are necessary to make the catalog record understandable and the cataloged resource discoverable.

Types of Core Elements:
  • RDA Core: Required elements that are always core as prescribed in RDA
  • RDA Core if: Core if applicable and if the information is available
  • LC Core and LC-PCC Core: Core elements prescribed by LC and PCC in addition to RDA Core and RDA Core if. (Some other institutions also have their own set of core elements)
Examples of RDA Core Elements:

Title
Statement of responsibility
Edition statement
Numbering of serials
  • 2.6.2 Numeric and/or alphabetic designation of first issue or part of sequence (for first or only sequence)
  • 2.6.3 Chronological designation of first issue or part of sequence (for first or only sequence)
  • 2.6.4 Numeric and/or alphabetic designation of last issue or part of sequence (for last or only sequence)
  • 2.6.5 Chronological designation of last issue or part of sequence (for last or only sequence)
  • For more details see: Numbering of Serials in RDA Cataloging
Production statement
Publication statement
Series statement
  • 2.12.2 Title proper of series
  • 2.12.9 Numbering within the series
  • 2.12.10 Title proper of subseries
  • 2.12.17 Numbering within subseries
Identifier for the manifestation
Carrier type
  • 3.3 Carrier type
Extent
  • 3.4 Extent (only if the resource is complete or if the total extent is known)
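
To make the list above concrete, here is a purely illustrative sketch of a core-only description for a hypothetical printed monograph, written out as a simple Python mapping so the elements and their values stay paired. The element set loosely follows the core list above; every value (title, publisher, ISBN, extent) is invented for the example and is not drawn from RDA or any policy statement.

    # Illustrative only: a hypothetical printed monograph described with a
    # core-only set of elements. Element names follow the list above; all
    # values are invented for the example.
    core_description = {
        "title_proper": "Introduction to resource description",
        "statement_of_responsibility": "Jane Smith",        # first statement only
        "edition_statement": "Second edition",
        "place_of_publication": "Chicago",                   # first place only
        "publisher_name": "Example Press",
        "date_of_publication": "2016",
        "identifier_for_manifestation": "ISBN 978-0-000-00000-0",
        "carrier_type": "volume",                            # RDA 3.3
        "extent": "xii, 250 pages",                          # RDA 3.4
    }

    for element, value in core_description.items():
        print(f"{element}: {value}")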

LC RDA CORE ELEMENTS (combination of RDA “Core” and RDA “Core if” elements plus additional elements)


Used for: RDA Core Elements

Glossary of Library & Information Science

All librarians and information professionals may use information from Glossary of Library & Information Science for their writings and research, with proper attribution and citation. I would appreciate it if you would let me know, too! Please cite as given below:

MLA: Haider, Salman. "Glossary of Library & Information Science." (2015)
Chicago: Haider, Salman. "Glossary of Library & Information Science." (2015)


by Salman Haider (noreply@blogger.com) at September 23, 2016 11:24 PM

September 22, 2016

OCLC Next

Celebrating the first 500 WMS libraries

A decade of remarkable change

In 2006, four library system vendors dominated the integrated library system market. OCLC partnered with most of them and was just beginning to consider its own solution. In the intervening decade, we’ve seen a lot of consolidation and rapid innovation.

Fast-forward to 2016. The ILS is now a legacy system, “next-gen” is practically passé, and Marshall Breeding has dubbed a new breed of library management and discovery services the “Library Service Platform.” Today, OCLC’s WorldShare Management Services (WMS) is one of only two offerings in this space—a true multi-tenant, cloud-based suite of services for managing and discovering the purchased and licensed collections of libraries. It took only five years for OCLC to attract 500 libraries to WMS, becoming a leading provider in a space that it didn’t even occupy a decade ago.

That would be a major achievement in any industry, by any company. That it was achieved by a nonprofit library cooperative is a credit to the unique power behind that success—our members.

Imagination + hard work = amazing

Earlier this week, we hosted a “WMS Global Community & User Group Meeting” in Dublin, Ohio. Members from 75 libraries and five countries gathered to talk about their WMS experiences. They shared insights with each other, talked about best practices and provided feedback to OCLC staff. You can see photos and tweets from the event at #WMSglobal.

Perhaps even more importantly, they helped us identify the boundaries of today’s services so that, together, we can imagine what WMS might look like in one, three or ten years.

That combination of imagination and hard work has been a hallmark of the WMS community since day one, when we started the development of WMS with five pilot libraries.

Cooperation yields real impact

In case you’ve never been involved in a “pilot,” it’s a lot like being asked to get somewhere by rolling head-over-heels down a hill and then describing the experience in detail. And then doing it again. Exciting, sure, but a lot of extra work, for which we are deeply grateful.

That’s why the WMS community is such an important part of the success of WMS—500+ libraries across six continents all coming together to define and create services based on combined experiences.


WMS 500: This is what global library cooperation looks like.


The global meeting was a chance for WMS members to share their experiences. To celebrate how far we’ve come in just a few short years, we shared a short video that touches on a few of the highlights of our journey.

 

What’s your vision for the future?

I’m so proud of what our WMS libraries and OCLC staff have accomplished together. We had a great time at the global community meeting and I hope I can see you at a similar event in the future.

Why? Because so much of what we’ve accomplished is based on the ideas, suggestions and vision of our members. If you have a dream about what library services should look like in one or five or ten years, that’s where you should be.

Question…what do you see in the future for library services? Let us know on Twitter with the hashtag #OCLCnext

The post Celebrating the first 500 WMS libraries appeared first on OCLC Next.

by Andrew K. Pace at September 22, 2016 06:52 PM

TSLL TechScans

NISO Launches New Project to Create a Flexible API Framework for E-Content in Libraries

On August 25, 2016, the National Information Standards Organization (NISO) announced a new project relating to APIs and data about electronic content in libraries.

Full text of the NISO announcement:

Voting Members of the National Information Standards Organization (NISO) have approved a new project to modernize library-vendor technical interoperability to improve the access of digital library content and electronic books. Building upon a set of API (Application Programming Interface) Requirements developed by Queens Library, a new NISO Working Group will create a foundational API set that the library community can build on. This set will fulfill an array of user and library needs, including quicker response times, flexible item discovery and delivery options, improved resource availability, and more seamless integration of electronic and physical resources.

Library patrons should expect an excellent user experience, and a requisite level of convenience should be built into all customer-facing tools that service library patrons. This project is being undertaken to bring patrons' library experiences in line with the modern tools and technologies (especially mobile technologies) they are accustomed to using in other areas of their lives. Currently, libraries use varied technologies, some of which rely on outdated and slow communication protocols, to provide services to users. By establishing standards on RESTful Web services APIs as well as standard mobile extensions, the library industry will leave many archaic, difficult-to-use tool sets behind, and allow libraries more flexibility in meeting local needs.

"11.2 million patrons visited the Queens Library in 2015," says Kelvin Watson, Chief Operating Officer, Senior Vice President, Queens Library. "It's imperative that we keep them coming back by providing fast, efficient service that rivals what they experience in the commercial world. Queens Library, which serves one of the five most diverse counties in the United States, has a vested interest in undertaking this work to customize library operations for specialized local needs. We are excited to have initiated this project at NISO and we look forward to working with other participants to actualize our draft framework."

Volunteer working group members will deliver a foundational framework, in the form of a NISO Recommended Practice, that will communicate an understanding of how libraries should provide and receive data. These library-related communications and functions could include customized genre or category views for browse, search, and discovery of collections; user authentication; transmission of account information; management of barcodes; checkout and return of items; streaming of online material; and other requirements as determined by stakeholders. Work will also include the creation of several proof-of-concept services that use the proposed approach to deliver services and a registry to enable supporting data providers and system vendors to communicate their support of the framework. The full work item approved by NISO Voting Members is available on the NISO website.
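
The Recommended Practice itself has not been written yet, so there is no real API to point to. Purely as a hypothetical illustration of the kind of RESTful, JSON-based interaction the framework envisions, a patron-facing availability lookup might look something like the Python sketch below; the host name, endpoint path, query parameter, and response fields are all invented for the example and are not part of any NISO specification.

    # Hypothetical sketch only: the NISO framework is still being developed, so
    # the host, endpoint, parameters, and response fields below are invented.
    import json
    import urllib.parse
    import urllib.request

    base = "https://api.example-library.org/v1"              # invented host
    query = urllib.parse.urlencode({"barcode": "39999001234567"})
    req = urllib.request.Request(
        f"{base}/items/availability?{query}",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        item = json.load(resp)
    print(item.get("status"), item.get("pickupLocations"))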

NISO's Associate Director of Programs, Nettie Lagace, comments, "NISO is eager to begin this work to improve library-patron interactions. Advancing vendor-library communication processes through consensus discussions and agreement is a natural fit in our portfolio of work. NISO's mission is to streamline the work of libraries and other information providers to get content into the hands of consumers." Lagace continues, "We encourage working group participation from libraries, library system providers, providers and distributors of e-books, recorded books, and other forms of digital content and media. We are looking forward to hearing from interested volunteers who can dedicate their technical talents to this important effort." Those who are interested in participating in the E-Content API Framework working group should contact Lagace at nlagace@niso.org.

by noreply@blogger.com (Emily Dust Nimsakont) at September 22, 2016 03:34 PM

September 15, 2016

025.431: The Dewey blog

Dewey at IFLA 2016

Papers and presentations from the IFLA 2016 Classification & Indexing Satellite Meeting "Subject Access: Unlimited Opportunities," held 11-12 August at the State Library of Ohio, Columbus, OH, are available here. Among them are two that focus on DDC:

Presentations from the International Dewey Users Meeting at IFLA 2016 held 16 August at the Greater Columbus Convention Center are available here. The agenda included:

  • EPC Meeting 139 (summary of decisions from DDC Editorial Policy Committee meeting about changes to DDC)
  • Data-driven development (use of WorldCat data to help identify areas of the DDC schedules needing development)
  • Linking FAST to Wikipedia and Wikidata (FAST [Faceted Application of Subject Terminology], library metadata, and the networked environment)
  • Principles underlying the EDUG recommendations for mapping involving Dewey (European DDC Users Group recommendations and the University of Oslo project of mapping to the Norwegian WebDewey)
  • PANSOFT software developments (separate slides here; at the meeting there was a live demonstration of ccmapper [concept context mapper], a new mapping product that is optimized for mapping subject terms to Dewey numbers, developed by PANSOFT together with the Norwegian Dewey team; and a live demonstration of the new user notification feature being developed for WebDewey)

by Juli at September 15, 2016 08:35 PM