Planet Cataloging

August 19, 2017

Terry's Worklog

MarcEdit 6.3 Updates (all versions)

I spent some time this week working on a few updates for MarcEdit 6.3.  Full change log below (for all versions).

Windows/Linux/MacOS:

* Bug Fix: MarcEditor: When processing data with right-to-left characters, the embedded markers were getting flagged by the validator.
* Bug Fix: MarcEditor: When processing data with right-to-left characters, I’ve heard that there have been some occasions when the markers are making it into the binary files (they shouldn’t).  I can’t recreate it, but I’ve strengthened the filters to make sure that these markers are removed when the mnemonic file format is saved.
* Bug Fix: Linked Data Tool: When creating VIAF entries in the $0, the subfield code could be dropped.  This was missed because VIAF should no longer be added to the $0, so I assumed this was no longer a valid use case.  However, local practice in some places is overriding best practice.  This has been fixed.

A note on the MarcEditor changes.  The processing of right-to-left characters is something I was aware of with regard to the validator – but in all my testing and unit tests, the data was always filtered prior to compiling.  The markers that are inserted are for display, as noted here: http://blog.reeset.net/archives/2103.  However, on the pymarc list, there was apparently an instance where these markers slipped through.  The conversation can be found here: https://groups.google.com/forum/#!topic/pymarc/5zxuOh0fVuc.  I posted a long response on the list, but I think it’s being held in moderation (I’m a new member to the list); generally, here’s what I found.  I can’t recreate it, but I have updated the code to ensure that this shouldn’t happen.  Once a mnemonic file is saved (and that happens prior to compiling), these markers are removed from the file.  If you find this isn’t the case, let me know.  I could push the filter down into the MARCEngine level, but I’d rather not, as there are cases where these values may be present (legally)…this is why the filtering happens in the Editor, where it can assess their use and, if the markers are already present, determine whether they are used correctly.
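
If you want to check your own files, here’s a minimal sketch of the idea in Python/pymarc (this is not MarcEdit’s actual code, and I’m assuming the markers in question are the Unicode bidi control characters U+200E/U+200F and U+202A–U+202E):

    # A minimal sketch (not MarcEdit's code): strip Unicode bidirectional
    # control characters from binary MARC records using pymarc 4.x, where
    # subfields are a flat [code, value, code, value] list.
    from pymarc import MARCReader, MARCWriter

    BIDI_MARKS = set("\u200e\u200f\u202a\u202b\u202c\u202d\u202e")

    def strip_bidi(text):
        # Drop the markers; the right-to-left text itself is untouched.
        return "".join(ch for ch in text if ch not in BIDI_MARKS)

    with open("records.mrc", "rb") as src, open("clean.mrc", "wb") as dest:
        writer = MARCWriter(dest)
        for record in MARCReader(src):
            for field in record.fields:
                if field.is_control_field():
                    field.data = strip_bidi(field.data)
                else:
                    field.subfields = [strip_bidi(s) for s in field.subfields]
            writer.write(record)
        writer.close()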

Downloads can be picked up through the automated update tool, or via http://marcedit.reeset.net/downloads.

–tr

by reeset at August 19, 2017 04:19 PM

August 14, 2017

025.431: The Dewey blog

Sustainability and T1—0286 Green technology (Environmental technology)

At its Meeting 140, EPC approved changes to clarify (1) how works on sustainability should be classed and (2) how T1—0286 Green technology (Environmental technology) should be used. The changes are now in WebDewey.

Notes have been added in several places to make clear that the interdisciplinary number for sustainability is 304.2 Human ecology, e.g., the class-here note at 304.2:

Class here ecological anthropology, human geography; interdisciplinary works on sustainability

A new scatter see reference has been added at 304.2 to make clear that specific social aspects of sustainability not provided for under 304.2 are classed with the aspect in 300, without use of notation T1—0286:

For a specific social aspect of sustainability not provided for here, see the aspect in 300, without use of notation T1—0286 from Table 1, e.g., works on sustainability that emphasize conservation and protection of natural resources 333.72, economic aspects of sustainable development 338.927

Many broad works about sustainability are classed in subdivisions of 330 Economics, e.g., 333.72 Conservation and protection [of natural resources]; however, the broadest works about sustainability include social topics not limited to economics. For example, among the United Nations Sustainable Development Goals are “quality education”; “gender equality”; and “peace, justice, and strong institutions.”

The class-here note at T1—0286 Green technology (Environmental technology) makes clear that T1—0286 should be used for sustainable technology:

Class here environmental engineering (environmental health engineering, environmental protection engineering), sustainable engineering (sustainable technology)

But a scatter class-elsewhere note specifies that T1—0286 should not be used for social aspects of sustainable technology:

Class social aspects of green technology with the aspect in 300 without use of notation T1—0286 from Table 1, e.g., economics of sustainable development through use of green technology 338.927

The obvious area for T1—0286 to be used is in 600 Technology (Applied sciences); however, its use is not limited to the 600s. For example, works about sustainable technology in manufacturing textiles (677 Textiles) are classed in the built number 677.00286 Green technology (Environmental technology). Similarly, T1—0286 can be added to 687 Clothing and accessories to build 687.0286 for works about sustainable technology in manufacturing clothing. Comprehensive works on sustainable manufacturing of textiles and clothing are classed in 677.00286. Works classed in 677.00286 and 687.0286 treat topics like eco-friendly materials (fibers, textiles, dyes), processes that minimize waste, and the full life cycle of the product (manufacture to use to recycling). Notation T1—0286 can also be added to 746.92 Costume [arts; has class-here note: Class here fashion design]. The built number 746.920286 Costume—green technology can be used for works that call on the fashion designer to consider technological issues similar to those treated in works classed in 677.00286 and 687.0286, in order to produce sustainable fashion. But T1—0286 cannot be added for works that emphasize social and economic aspects of manufacturing textiles or clothing or the fashion industry.

Here are examples of works that can be classed in the numbers under discussion. [Note: perhaps because people have been unsure whether notation T1—0286 could be added to 746.92, we could not find an example of the built number 746.920286; but we found works for which that number would be appropriate.]

304.2 Human ecology

The give and take of sustainability: Archaeological and anthropological perspectives on tradeoffs

333.72 Conservation and protection [of natural resources]

Natural resource conservation: Management for a sustainable future

338.4774692 Costume industry [fashion industry; built with 338.47 plus 74692]

Overdressed: The shockingly high cost of cheap fashion

338.927 Appropriate technology

Creating a sustainable economy: An institutional and evolutionary approach to environmental policy

640.286 Home management—green technology

Green wizardry: Conservation, solar power, organic gardening, and other hands-on skills from the appropriate tech toolkit

677.00286 Green technology (Environmental technology) [textile manufacturing]

Handbook of Life Cycle Assessment (LCA) of textiles and clothing

746.920286 Costume—green technology [fashion design; built with 746.92 plus T1—0286]

Shaping sustainable fashion: Changing the way we make and use clothes

See also earlier blog post about sustainable living.

by Juli at August 14, 2017 08:58 PM

August 11, 2017

TSLL TechScans

Ebook collection analysis

Two publications recently came across my desk: the May/June 2017 Library Technology Reports called Applying Quantitative Methods to E-Book Collections by Melissa J. Goertzen, and the June 2017 issue of Computers in Libraries called Ebooks Revisited. This suggests that as ebooks continue to be a large collection issue for libraries on various levels (platforms, pricing, patron-driven acquisition (PDA) and demand-driven acquisition (DDA), discovery records, etc.), we are reaching a point where we can more fully evaluate the long-term impact they are having on our patrons and our budgets. I was particularly interested in the Computers in Libraries article called Ebook ROI: A Longitudinal Study of Patron-Driven Acquisition Models by Yin Zhang and Kay Downey. The authors work at Kent State University Libraries and have been using a PDA program for five years now; they were able to use this long-term data to evaluate the usefulness of short-term loans, determine whether PDA purchases continue to be used after the purchase is triggered, and analyze which books from various publication years and subject areas are purchased under their PDA profile. I found this study inspiring; we have had our DDA program for less than a year, but I hope to conduct a similar analysis after a full year of the program, and regularly thereafter, so we can be sure our patrons are finding the program useful.

by noreply@blogger.com (Anna Lawless-Collins) at August 11, 2017 03:29 PM

August 10, 2017

Terry's Worklog

MarcEdit 7 Z39.50/SRU Client Wireframes

One of the appalling discoveries when taking a closer look at the MarcEdit 6 codebase was the presence of 3(!) Z39.50 clients (all using slightly different codebases).  This happened because of the ILS integration, the direct Z39.50 database editing, and the actual Z39.50 client.  In the Mac version, these clients are all the same thing – so I wanted to emulate that approach in the Windows/Linux version.  And as a plus, maybe I would stop (or reduce) my utter disdain at having to support Z39.50 generally, within any library program that I work with.

* Sidebar – I really, really, really can’t stand working with Z39.50.  SRU is a fine replacement for the protocol, and yet, over the 10-15 years that it’s been available, SRU has remained a fringe protocol.  That tells me two things:

  1. Library vendors generally have rejected this as a protocol, and there are some good reasons for this…most vendors that support it (and I’m thinking specifically about Ex Libris) use a custom profile.  This is a pain in the ass because a custom profile requires code to handle foreign namespaces.  That wouldn’t be a problem if it only happened occasionally, but it happens all the time: every SRU implementation works best if you use its custom profile.  I think what made Z39.50 work is the well-defined set of Bib-1 attributes.  The flexibility in SRU is a good thing, but I also think it’s why very few people support it, and fewer understand how it actually works.  (A minimal searchRetrieve sketch follows this list.)
  2. SRU is a poor solution to begin with.  Hey, just like OAI-PMH, we created library standards to work on the web.  If we had it to do over again, we’d do it differently.  We should probably do it differently at this point…because supporting SRU in software is basically just checking a box.  People have heard about it, they ask for it, but pretty much no one uses it.
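
For context, this is what a minimal SRU searchRetrieve call looks like in practice; the endpoint and index below are examples only, and the namespace juggling at the end is exactly the pain point I’m describing:

    # A minimal SRU searchRetrieve sketch. The endpoint and the "bath.isbn"
    # index are examples only; as noted above, every server's profile is
    # a little different.
    import requests
    from lxml import etree

    ENDPOINT = "http://lx2.loc.gov:210/LCDB"  # example: Library of Congress SRU
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": 'bath.isbn="9780262033848"',
        "recordSchema": "marcxml",
        "maximumRecords": "5",
    }
    response = requests.get(ENDPOINT, params=params, timeout=30)
    root = etree.fromstring(response.content)

    # The SRU envelope and the MARCXML payload each live in their own
    # namespace; custom profiles add still more of these.
    NS = {"srw": "http://www.loc.gov/zing/srw/",
          "marc": "http://www.loc.gov/MARC21/slim"}
    for rec in root.findall(".//srw:recordData/marc:record", NS):
        title = rec.find('marc:datafield[@tag="245"]/marc:subfield[@code="a"]', NS)
        print(title.text if title is not None else "[no 245$a]")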

By consolidating the Z39.50 client code, I’m able to clean out a lot of old code and, better yet, actually focus on a few improvements (which has been hard, because I make improvements in the main client but forget to port them everywhere else).  The main improvements that I’ll be applying have to do with searching multiple databases.  Single search has always allowed users to select up to 5 databases to query; I may remove that limit, as it’s kind of an arbitrary one.  However, I’ll also be adding this functionality to the batch search.  When doing multiple database searches in batch, users will have an option to take all records, the first record found, or potentially (I haven’t worked this one out) records based on order of database preference.  A sketch of these selection policies follows.
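
Something like the following is what I have in mind (hypothetical names; this isn’t MarcEdit’s code):

    # A sketch of the batch selection policies (hypothetical names, not
    # MarcEdit's code): take everything, take the first record found, or
    # take the hits from the most-preferred database that returned any.
    def select_records(hits_by_db, policy="first", db_order=None):
        """hits_by_db maps database name -> list of retrieved records."""
        order = db_order or list(hits_by_db)
        if policy == "all":
            return [rec for db in order for rec in hits_by_db.get(db, [])]
        for db in order:  # walk databases in preference order
            if hits_by_db.get(db):
                # "first" keeps one record; "preferred" keeps all from this db
                return hits_by_db[db][:1] if policy == "first" else hits_by_db[db]
        return []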

Wireframes:

Main Window:

[image: Main Window wireframe]

Z39.50 Database Settings:

[image: Z39.50 Database Settings wireframe]

SRU Settings:

[image: SRU Settings wireframe]

There will be a preferences panel as well (I haven’t created it yet); this is where you will set proxy information and notes related to batch preferences.  You will no longer need to set the title field or limits, as the limits are moving to the search screen (this has always needed to be variable) and the title field data is pulled from values already set in the program preferences.

One of the benefits of making these changes is that they fold the Z39.50/SRU client into the main MarcEdit application (rather than a program that was shelled out to), which allows me to leverage the same accessibility platform that has been developed for the rest of the application.  It also highlights one of the other changes happening in MarcEdit 7.  MarcEdit 6 is a collection of about 7 or 8 individual executables.  This makes sense in some cases, less sense in others.  I’m evaluating all of the stand-alone programs; where the functionality is replicated in the main program, the code (both external and internal) needs to be re-evaluated and put in one spot, because what made sense as a separate program initially may not fit the current structure of the application.  In practice, this means that in some cases, like the Z39.50 client, the code will move into MarcEdit proper (rather than being a separate program called mebatch.exe), and for SQL interactions, it means I’ll create a single shared library (rather than replicating code between three different component parts: the SQL explorer, the ILS integration, and the local database query tooling).
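
To illustrate the shared-library idea with a toy example (hypothetical names, and MarcEdit itself is a .NET application, so this Python sketch is only illustrative):

    # Illustrative only (hypothetical names; MarcEdit itself is .NET):
    # one shared data-access class instead of three private copies.
    import sqlite3

    class LocalStore:
        """Single gateway shared by the SQL explorer, the ILS integration,
        and the local database query tooling."""
        def __init__(self, path):
            self.conn = sqlite3.connect(path)

        def query(self, sql, params=()):
            # Every consumer runs its SQL through the same code path,
            # so a fix lands once rather than three times.
            return self.conn.execute(sql, params).fetchall()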

Questions?  Let me know.

–tr

by reeset at August 10, 2017 05:37 PM

August 08, 2017

025.431: The Dewey blog

New numbers for martial arts topics

Since last year, the editorial team has published two rounds of updates to the schedules for martial arts. These changes were approved by EPC at both Meetings 139 and 140, and are now live in WebDewey.

Martial arts are found at 796.8 Combat sports (combat sports and martial arts are synonymous in Dewey).

[Figure 1]

We've authorized 796.85 Armed combat, continuing armed combat from 796.8. It has two children: 796.852 Knife fighting and 796.855 Stick fighting. Fencing has been continued from 796.86 to 796.862, leaving the scope of 796.86 Sword fighting clearer.

Because some people would interpret "martial arts" as particularly suggesting martial arts forms with Asian origins, a class-elsewhere note at 796.8 points us to 796.815 Oriental martial arts forms.

[Figure 2]

The scope note for 796.815 tells us it is "limited to martial arts forms originating in, or in styles characteristic of, Eastern, Southern, and Southeast Asia." What matters here is where the martial arts form itself originated, not where it’s being practiced. So a work like Al Weiss' The official history of karate in America: the golden age, 1968-1986 would belong at 796.8153097309047, built with 796.8153 Karate, plus standard subdivision T1—09 History, geographic treatment, biography, plus notation 73 United States from Table 2, plus notation 09047 1970-1979 from Table 1, both following the instructions at T1—093-099. A work on a martial arts form practiced in Eastern, Southern, or Southeast Asia, but originating elsewhere, would class with the form.
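
The mechanics of such a build are simple enough to sketch: strip the decimal point, concatenate the notations, and restore the point after the third digit. (An illustrative helper only, not an editorial tool.)

    # Illustrative sketch of number building: concatenate the digits of the
    # base number and each added notation; the decimal point always sits
    # after the third digit. Not an official editorial tool.
    def build_number(base, *notations):
        digits = base.replace(".", "") + "".join(notations)
        return digits if len(digits) <= 3 else digits[:3] + "." + digits[3:]

    # 796.8153 Karate + T1--09 + T2--73 + T1--09047 = 796.8153097309047
    print(build_number("796.8153", "09", "73", "09047"))
    # The fitness builds discussed below work the same way:
    print(build_number("613.7148", "1"))  # 613.71481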

The interdisciplinary number for martial arts is 796.8, but works with a disciplinary focus outside of sports and recreation should be classed elsewhere. We recently updated coverage of martial arts in the physical fitness schedules, at 613.7148 Exercises from the martial arts traditions and related traditions. We continued exercises from specific combat sports to a new span, 613.71481-613.71486. It has the following add instruction:

[Figure 3]

So a work like The everything Krav Maga for fitness book: get fit with this high-intensity martial arts workout! belongs at 613.71481, built with 613.7148, plus notation 1 from 796.81 Unarmed combat. Since this span builds from the numbers at 796.8, development for a new martial arts form means there will automatically be a number for the form practiced for physical fitness too. Indeed, we’ve authorized a new martial arts number recently:

[Figure 4]

The new number, 796.817 Kickboxing, is suitable for works like The kickboxing handbook. Since kickboxing is frequently practiced for exercise and fitness, expect to see 613.714817 (built with 613.7148, plus notation 7 from 796.817) used for works like Kick your way to fitness: the fastest way to lose weight and stay in shape. You may have previously seen works on kickboxing classed elsewhere, such as in 796.83 Boxing. But kickboxing is a different sport from boxing, which by definition involves punching. Having found literary warrant for it, we decided that authorizing a number for kickboxing would be the best way to address the confusion.

by Alex at August 08, 2017 06:55 PM

OCLC Next

Come on in, the water’s fine


In the summer of 2016, I received a phone call from OCLC asking if I’d be interested in becoming one of the first early adopters for a service that would be replacing ILLiad. It would be an enhanced WorldShare ILL system that would include many of the unique features of ILLiad.

Move away from ILLiad? And do so at the “bleeding edge” of a new service? And, not being much of a techie, I find that changing any computer-based system is a challenge. At that very moment, the idea seemed overwhelming and, frankly, hugely unsettling.

After giving it some thought, though, I considered that I actually like new challenges. The Interlibrary Loan office was slowing down a bit as the summer wore on, too. And it occurred to me that, since all ILLiad libraries would eventually need to change, I’d rather be part of the first cohort with all the OCLC tech support behind me. I also thought that being involved in an early adopter program like this might be both professionally challenging and fun. So I said, “Yes!”

Diving in

On September 1, 2016, the first cohort began their implementation of Tipasa. It has been a journey of fits and starts (remember my lack of tech skills), but we emerged with a new platform that features a clean, simple interface that can easily be shared by all interlibrary loan workers in our library.

Many of our workflows have been simplified, too. We used this migration opportunity to finally implement patron Lightweight Directory Access Protocol (LDAP) authentication, which caused some delays, as both Immaculata and OCLC needed to work out some wrinkles. We could have kept our manual authentication, but with great IT support on our campus, I believed the time had come to upgrade that function.

One lap at a time

Through interactive webinars, emails, and phone calls, we learned all about the new system. It was surprising to me how the six libraries in our cohort had such divergent workflows. That we were all using ILLiad in such different ways meant that Tipasa had to be just as adaptable. The OCLC team listened as we objected to a missing function here, protested a change there, or recommended a brand new idea. OCLC also set up a Community Center page so that we could interact with and support each other. We also used an “Enhancement page” where we could post our desires for future improvements.


We are finished with our migration and things are running smoothly in Tipasa. The change was seamless for our patrons, even though we launched during a very busy time in the new spring semester.

While there are several functions we need that are not yet available in Tipasa, OCLC has committed to listening to us as we lobby for missing items. Some of those were delivered in the first big upgrade in May, I’m pleased to say.

Calming the waters

As a past chairperson of the Interlibrary Loan committee of our library consortium, I brought ILLiad to 11 of our libraries through consortia pricing and a Library Services and Technology Act (LSTA) grant. For that reason, I felt a responsibility to those libraries to help them through the transition to a new product.

Once we were fully operational, we held a video conference meeting on our campus, which we recorded for future reference. My objective was to calm any fears and encourage others to embrace this change. The event even attracted libraries from beyond our consortium. Clearly, Tipasa is a hot topic!

I gave a brief overview of how Tipasa works by actually using it. I pulled up my account, opened some requests, spent some time in the configuration pages (which are so much easier to use and more customizable than ILLiad’s!) and fielded lots of questions.

I believe I accomplished my objective. And I also made myself available to those other libraries as they begin their own transitions.

Smooth sailing

ILLiad is now a distant memory for my library, as we have become comfortably settled with Tipasa. The transition certainly had challenging moments, but I always felt supported by OCLC. With lots of humor and good communication, we did it! I learned so much from other libraries’ workflows, presented at a web conference for the first time, and now know more than I’d ever thought possible about authentication.

Being part of an OCLC early adopter program was a voyage of discovery for me. Challenging at times, yes, but in a good way. I’d recommend the experience to anyone looking for ways to learn more about how OCLC staff and members work together on new products and services. And, it was gratifying to know that our library could make a broader contribution to the library community; our feedback has resulted in a stronger product for our peers, both today and into the future.

The post Come on in, the water’s fine appeared first on OCLC Next.

by Carla G. Sands at August 08, 2017 05:04 PM

Coyle's InFormation

On reading Library Journal, September, 1877

Among the many advantages of retirement is the particular one of idle time. And I will say that as a librarian one could do no better than to spend some of that time communing with the history of the profession. The difficulty is that it is so rich, so familiar in many ways, that it is hard to move through it quickly. Here is just a fraction of the potential value to be found in the September issue of volume two of Library Journal.* Admittedly this is a particularly interesting number because it reports on the second meeting of the American Library Association.

For any student of library history it is especially interesting to encounter certain names as living, working members of the profession.



Other names reflect works that continued on, some until today, such as Poole and Bowker, both names associated with long-running periodical indexes.

What is particularly striking, though, is how many of the topics of today were already being discussed then, although obviously in a different context. The association was formed, at least in part, to help librarianship achieve the status of a profession. Discussed were the educating of the public on the role of libraries and librarians as well as providing education so that there could be a group of professionals to take the jobs that needed that professional knowledge. There was work to be done to convince state legislatures to support state and local libraries.

One of the first acts of the American Library Association when it was founded in 1876 (as reported in the first issue of Library Journal) was to create a Committee on Cooperation. This is the seed for today's cooperative cataloging efforts as well as other forms of sharing among libraries. In 1877, undoubtedly encouraged by the participation of some members of the publishing community in ALA, there was hope that libraries and publishers would work together to create catalog entries for in-print works.
This is one hope of the early participants that we are still working on, especially the desire that such catalog copy would be "uniform." Note that there were also discussions about having librarians contribute to the periodical indexes of R. R. Bowker and Poole, so the cooperation would flow in both directions.

The physical organization of libraries also was of interest, and a detailed plan for a round (actually octagonal) library design was presented:

[image: octagonal library floor plan]

His conclusion, however, shows a difference in our concepts of user privacy.

Especially interesting to me are the discussions of library technology. I was unaware of some of the emerging technologies for reproduction such as the papyrograph and the electric pen. In 1877, the big question, though, was whether to employ the new (but as yet un-perfected) technology of the typewriter in library practice.

There was some pooh-poohing of this new technology, but some members felt it might be reaching a state of usefulness.


"The President" in this case is Justin Winsor, Superintendent of the Boston Library, then president of the American Library Association. Substituting more modern technologies, I suspect we have all taken part in this discussion during our careers.

Reading through the Journal evokes a strong sense of "plus ça change..." but I admit that I find it all rather reassuring. The historical beginnings give me a sense of why we are who we are today, and what factors are behind some of our embedded thinking on topics.


* Many of the early volumes are available from HathiTrust, if you have access. Although the texts themselves are public domain, these are Google-digitized books and are not available without a login. (Don't get me started!) If you do not have access to those, most of the volumes are available through the Internet Archive. Select "text" and search on "library journal". As someone without HathiTrust institutional access I have found most numbers in the range 1-39, but am missing (hint, hint): 5/1880; 8-9/1887-88; 17/1892; 19/1894; 28-30/1903-1905; 34-37/1909-1912. If I can complete the run I think it would be good to create a compressed archive of the whole and make that available via the Internet Archive to save others the time of acquiring them one at a time. If I can find the remainder that are pre-1923 I will add those in.

by Karen Coyle (noreply@blogger.com) at August 08, 2017 01:54 PM

August 06, 2017

Terry's Worklog

MarcEdit 7 Alpha: the XML/JSON Profiler

Metadata transformations can be really difficult.  While I try to make them easier in MarcEdit, the reality is that the program has long functioned as a facilitator of the process, handling the binary data processing and character set conversions that may be necessary.  But the heavy lifting, that’s all been on the user.  And if you think about it, there is a lot of expertise tied up in even the simplest transformation.  Say your library gets an XML file full of records from a vendor.  As a technical services librarian, I’d have to go through the following steps to remap that data into MARC (or something else):

  1. Evaluate the vended data file
  2. Create a metadata dictionary for the new XML file (so I know what each data element represents)
  3. Create a mapping between the data dictionary for the vended file and MARC
  4. Create the XSLT crosswalk that contains all the logic for turning this data into MARCXML
  5. Set up the process to move data from XML to MARC (steps 4 and 5 are sketched below)
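
Mechanically, the final step is small; the expertise is all in the stylesheet.  A minimal sketch of steps 4 and 5 with lxml (file names are placeholders):

    # Minimal sketch of step 5 with lxml; file names are placeholders.
    # Writing vendor2marcxml.xsl (step 4) is where the expertise goes.
    from lxml import etree

    transform = etree.XSLT(etree.parse("vendor2marcxml.xsl"))
    result = transform(etree.parse("vendor_records.xml"))
    with open("records_marcxml.xml", "wb") as out:
        out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))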


All of these steps are really time-consuming, but the development of the XSLT/XQuery to actually translate the data is the one that stops most people.  While there are many folks in the library technology space (and technical services space) who would argue that the ability to create XSLT is a vital job skill, let’s be honest: people are busy.  Additionally, there is a big difference between knowing how to create an XSLT and writing a metadata translation.  These things get really complicated, and change all the time (XSLT is up to version 3), meaning that even if you learned how to do this years ago, the skills may be stale or may not translate into the current XSLT version.

Additionally, in MarcEdit, I’ve tried really hard to make the XSLT process as simple and straightforward as possible.  But the reality is, I’ve only been able to work on the edges of this goal.  The tool handles the transformation of binary and character encoding data (since the XSLT engines cannot do that), and it uses a smart processing algorithm to try to improve speed and memory handling while still enabling users to work with either DOM or SAX processing techniques.  And I’ve tried to introduce a paradigm that enables reuse and flexibility when creating transformations.  Folks that have heard me speak have likely heard me talk about this model as a wheel and spoke:

[image: wheel-and-spoke model diagram]

The idea behind this model is that as long as users create translations that map to and from MARCXML, the tool can automatically enable transformations to any of the known metadata formats registered with MarcEdit.  There are definitely tradeoffs to this approach (for sure, a direct 1-to-1 translation would produce the best result, but it also requires more work and requires users to be experts in both the source and final metadata formats), but the benefit from my perspective is that I don’t have to be the bottleneck in the process.  Were I to hard-code or create 1-to-1 conversions, any deviation or local use within a spec would render the process unusable…and that was something that I really tried to avoid.  I’d like to think that this approach has been successful, and has enabled technical services folks to make better use of the marked-up metadata that they are provided.
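
In code terms, the model amounts to two hops through the MARCXML hub.  Here’s a sketch (the stylesheet names are illustrative, not MarcEdit’s actual registry):

    # The wheel-and-spoke model in miniature: every registered format maps
    # to and from the MARCXML hub, so any-to-any conversion is two hops.
    # Stylesheet names are illustrative, not MarcEdit's actual registry.
    from lxml import etree

    def load(path):
        return etree.XSLT(etree.parse(path))

    TO_HUB = {"mods": load("MODS2MARCXML.xsl"), "oai_dc": load("DC2MARCXML.xsl")}
    FROM_HUB = {"mods": load("MARCXML2MODS.xsl"), "oai_dc": load("MARCXML2DC.xsl")}

    def convert(doc, source, target):
        # Hop 1: source format -> MARCXML hub; hop 2: hub -> target format.
        hub = TO_HUB[source](doc) if source != "marcxml" else doc
        return FROM_HUB[target](hub) if target != "marcxml" else hub

    # convert(etree.parse("records_mods.xml"), "mods", "oai_dc")

Registering one new to/from pair buys conversions to every format already on the wheel, which is exactly the tradeoff described above.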

The problem is that as content providers have moved more of their metadata operations online, a large number have shifted away from standards-based metadata to locally defined metadata profiles.  This is challenging because these are one-off formats that really are only applicable to a publisher’s particular customers.  As a result, it’s really hard to find conversions for these formats.  The result, for me, is large numbers of catalogers/MarcEdit users asking for help creating these one-off transformations…work that I simply don’t have time to do.  And that can surprise folks.  I try hard to make myself available to answer questions.  If you find yourself on the MarcEdit listserv, you’ll likely notice that I answer a lot of the questions…I enjoy working with the community.  And I’m pretty much always ready to give folks feedback and toss around ideas when folks are working on projects.  But there is only so much time in the day, and only so much that I can do when folks ask for this type of help.

So, transformations are an area where I get a lot of questions.  Users faced with these publisher-specific metadata formats often reach out for advice, or to see if I’ve worked with a vendor in the past.  And for years, I’ve been wanting to do more for this group.  While many metadata librarians would consider XSLT or XQuery required skills, those skills aren’t always at hand when you’re faced with a mountain of content moving through an organization.  So, I’ve been collecting user stories and outlining a process that I think could help: an XML/JSON Profiler.

So, it’s with a lot of excitement that I can write that MarcEdit 7 will include this tool.  As I say, it’s been a long time coming, and the goal is to reduce the technical requirements needed to process XML or JSON metadata.

XML/JSON Profiler

To create this tool, I had to decide how users would define their data for mapping.  Given that MarcEdit has a Delimited Text Translator for converting Excel data to MARC, I decided to work from this model.  The code produced does a few things:

  1. It validates the XML format to be profiled.  Mostly, this means the tool makes sure that schemas are followed, namespaces are defined and discoverable, etc.
  2. It outputs data in MARC, MARCXML, or another XML format.
  3. It shifts the mapping of data from an XML file to a delimited-text model (though it’s not actually creating a delimited text file).
  4. It assumes that, since the data is in XML, the data should be in UTF-8.
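
Conceptually, what the profiler captures boils down to a record path plus a set of element-to-field pairings.  Here’s an illustrative sketch of that idea in Python (the actual MarcEdit implementation is .NET, and its profile format is its own):

    # Illustrative sketch only (the real MarcEdit implementation is .NET
    # and its profile format is its own): a profile is conceptually a
    # record path plus element-to-field pairings.
    from lxml import etree
    from pymarc import Field, Record

    PROFILE = {
        "record_path": "//item",            # element that delimits one record
        "map": {                            # XML child -> MARC tag/subfield
            "title": ("245", "a"),
            "author": ("100", "a"),
            "isbn": ("020", "a"),
        },
    }

    def profile_to_marc(xml_file):
        for item in etree.parse(xml_file).xpath(PROFILE["record_path"]):
            rec = Record()
            for child, (tag, code) in PROFILE["map"].items():
                node = item.find(child)
                if node is not None and node.text:
                    rec.add_field(Field(tag=tag, indicators=[" ", " "],
                                        subfields=[code, node.text]))
            yield rec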


Users can access the wizard through the updated XML Functions Editor.  Open MARC Tools and select Edit XML Function List, and you see the following:

[image: XML Functions Editor window]

I’ve highlighted the XML Function Wizard.  I may also make this tool available from the main window.  Once the wizard is selected, the program walks users through a basic reference interview:

Page 1:

[image: XML Function Wizard, page 1]


From here, users just need to follow the interview questions.  Users will need a sample XML file that contains at least one record in order to create the mappings against.  As users walk through the interview, they are asked to identify the record element in the XML file, as well as to map XML tags to MARC tags, using the same interface and tools found in the Delimited Text Translator.  Users also have the option to map data directly to a new metadata format by creating an XML mapping file – a representation of the XML output – which MarcEdit will then use to generate new records.

Once a new mapping has been created, the function will be registered in MarcEdit and be available like any other translation.  Whether this process simplifies the conversion of XML and JSON data for librarians, I don’t know.  But I’m super excited to find out.  This creates a significant shift in how users can interact with marked-up metadata, and I think it will remove many of the technical barriers that exist for users today…at least, for those users working with MarcEdit.

To give a better idea of what is actually happening, I created a demonstration video of an early version of this tool in action.  You can find it here: https://youtu.be/9CtxjoIktwM.  This provides an early look at the functionality, and hopefully helps provide some context around the above discussion.  If you are interested in seeing how the process works, I’ve posted the code for the parser on my GitHub page here: https://github.com/reeset/meparsemarkup

Do you have questions or concerns?  Let me know.


–tr

by reeset at August 06, 2017 07:06 PM