Archive for the ‘xISBN’ Category


An interesting idea for using xISBN information

2006-06-20

I was discussing xISBN and FRBR with my colleague and friend Felicia Berke, who made a great suggestion:

now the question is…can a patron place a hold for the entire cluster?
say, a child who has been assigned to read "to kill a mockingbird" and doesn't care which edition?

People, this is why we need to stop discriminating against people who don't have an MLS. (In fact, if you know someone who might want to talk to her about a library-school fellowship or a job, pass this link along!)

I explained that the current version of xISBN doesn't distinguish between To Kill a Mockingbird and Matar un ruiseñor (the Spanish translation). For a good example, see DaveyP's mashup of my xISBN responder against Amazon.

Yes, there are several acceptable editions of Harry Potter and the Prisoner of Azkaban, but there are also a few copies of the Spanish version. And a more complete database would probably list the Braille edition, which may technically be in English but is probably encoded thoroughly enough to confound anyone who has only read Braille for the Sighted. (If your only exposure to Braille is through bookmarks that show the Braille cell for each letter of the English alphabet, keep in mind that "real" Braille shortens the physical length of the content with various contraction rules.)

But her idea remains an important one. I'd love to see a web catalog that says "There are 281 people ahead of you in line for this title. We also have large print, unabridged CD, and abridged tape copies, as well as the Spanish translation. Would any of those suit your needs?"

In order to do this, however, we need more information about the related ISBNs. Perhaps a future version of xISBN will make it easier to do this. Maybe I just need to be querying a different service to find out what format, edition, and language a particular ISBN represents. Any suggestions?
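To make the idea a bit more concrete, here is a minimal sketch of the lookup such a catalog would need to do. Everything in it is invented for illustration: the placeholder "ISBN" keys, the per-edition format and language fields, and the function name; no existing service returns this data today.

# Entirely hypothetical sketch: the per-ISBN format/language records and
# the placeholder "ISBNs" below are all invented. This is the kind of data
# the paragraph above is wishing for, not anything xISBN currently returns.
EDITION_INFO = {
    "isbn-hardcover":     {"format": "hardcover",     "language": "English"},
    "isbn-large-print":   {"format": "large print",   "language": "English"},
    "isbn-cd-unabridged": {"format": "unabridged CD", "language": "English"},
    "isbn-tape-abridged": {"format": "abridged tape", "language": "English"},
    "isbn-spanish":       {"format": "paperback",     "language": "Spanish"},
}

def offer_alternatives(requested_isbn):
    """Describe the other editions a patron might accept instead."""
    return [
        "%s (%s)" % (info["format"], info["language"])
        for isbn, info in EDITION_INFO.items()
        if isbn != requested_isbn
    ]

if __name__ == "__main__":
    for description in offer_alternatives("isbn-hardcover"):
        print("We also have:", description)

The hard part, of course, is not the loop; it's getting trustworthy format, edition, and language data for each ISBN from somewhere.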


Our xISBN cache is a small subset of OCLC’s database

2006-06-19

This clarification will, I hope, reassure some folks at OCLC who were concerned that I might be giving their competitors access to a copy of their xISBN database.

For the record, then:

If there are 60 ISBNs listed for Harry Potter and the Order of Fries, and only 12 of those ISBNs appear within bib records in our database, then our cache will only store those 12 ISBNs. It doesn't tell the requester that there may be other relevant ISBNs; vendors and developers who want accurate and complete cross-references should be talking to OCLC (specifically their Openly division) about using their service. Our cached subset is tailored to store only as much information as would be useful to us.
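As a rough sketch of that "store only what we hold" rule, assuming a stand-in function for the OCLC lookup and a set of ISBNs drawn from our own bib records (both names are mine, not a real API):

# Sketch of the filtering described above. fetch_full_xisbn_list() and
# local_isbns are stand-ins I've made up; the real script talks to OCLC's
# service and to our bib database.

def fetch_full_xisbn_list(isbn):
    """Stand-in for asking OCLC's xISBN service for every related ISBN."""
    raise NotImplementedError("placeholder for the real OCLC query")

def cache_entry_for(isbn, local_isbns):
    """Return only the related ISBNs that appear in our own bib records."""
    return [related for related in fetch_full_xisbn_list(isbn)
            if related in local_isbns]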

Eric Hellman of OCLC Openly has offered the suggestion that I may want to cache all 60 ISBNs so that our cache could return a list of the 12 relevant ISBNs in response to a query for a 13th. If we do so, I'll restrict non-local access so that the server refuses to give any information to requesters outside of TBLC. (I've already added a robots.txt file at OCLC's request.)
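For what it's worth, that restriction could be as simple as checking the requester's address against the consortium's network ranges before answering. The range below is a placeholder, not TBLC's actual network, and the 403 handling is only indicated in a comment.

# Illustrative only: refuse to answer requests that come from outside the
# consortium. The network range here is a placeholder, not TBLC's real one.
import ipaddress

ALLOWED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8")]  # placeholder range

def request_is_local(remote_addr):
    """True if the requesting address falls inside an allowed network."""
    try:
        addr = ipaddress.ip_address(remote_addr)
    except ValueError:
        return False
    return any(addr in network for network in ALLOWED_NETWORKS)

# In a CGI-style responder, the caller's address arrives in the REMOTE_ADDR
# environment variable; a non-local request would get a 403 instead of the
# ISBN list, e.g.:
#   if not request_is_local(os.environ.get("REMOTE_ADDR", "")):
#       print("Status: 403 Forbidden\n")
#       raise SystemExit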

He has also mentioned that my tinkering has generated a bit of interest within OCLC. This shouldn't have surprised me as much as it did; I've also been talking with people from OCLC PICA about it. Still, it's welcome news. I'm likely to be relocating in about a year, and I'm more enthusiastic about my family's prospects when I reflect that my name is not completely unknown.


Our local xISBN cache is working

2006-06-09

Using the format of OCLC's xISBN service, I've built a local xISBN responder. You can feed it any ISBN you like; as with OCLC's version, if it's a valid ISBN that appears in our catalog, you'll get an XML-formatted list of the related ISBNs in our catalog.

For example:

http://helpdesk.tblc.org/xisbn/0439136350

returns this:

<?xml version="1.0" encoding="UTF-8" ?>
  <idlist>
    <isbn>0439136350</isbn>
    <isbn>0439136369</isbn>
    <isbn>043965548X</isbn>
    <isbn>0786222743</isbn>
    <isbn>0807282316</isbn>
    <isbn>0807282324</isbn>
    <isbn>0807283150</isbn>
    <isbn>0807286028</isbn>
    <isbn>8478885196</isbn>
    <isbn>8478886559</isbn>
  </idlist>

This makes me ridiculously happy. Now I can learn a bit of Ajax that will query my xISBN responder and construct links to related items.
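I haven't written any of that Ajax yet, so here is a minimal command-line sketch in Python of the same query-and-link flow. The responder URL matches the example above; the catalog search URL is a made-up placeholder, since I'm not reproducing our catalog's real link syntax here.

# Minimal sketch of what the eventual Ajax code would do: ask the local
# xISBN responder for the related ISBNs, then turn each one into a link.
# The catalog search URL below is a placeholder, not our catalog's real
# link syntax.
import urllib.request
import xml.etree.ElementTree as ET

RESPONDER = "http://helpdesk.tblc.org/xisbn/"
CATALOG_SEARCH = "http://catalog.example.org/search?isbn="  # placeholder

def related_isbns(isbn):
    """Fetch the responder's <idlist> and return the ISBNs it contains."""
    with urllib.request.urlopen(RESPONDER + isbn) as response:
        tree = ET.parse(response)
    return [element.text for element in tree.iter("isbn")]

def related_links(isbn):
    """Build a (placeholder) catalog link for each related ISBN."""
    return [CATALOG_SEARCH + other for other in related_isbns(isbn)
            if other != isbn]

if __name__ == "__main__":
    for link in related_links("0439136350"):
        print(link)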

Caveats:

  1. I didn't bother writing XHTML responses, so adding .html to the URL will not do anything interesting.
  2. If you ask for an ISBN that's not in our catalog, you won't get very useful results.

Still… not bad for a day's work. What do you think, sirs?


Creating a locally relevant xISBN cache

2006-05-25


OCLC has a neat service, xISBN, that will tell you what ISBNs are associated with other editions of a given work. For example, if you ask it about the ISBN representing the hardcover edition, it will tell you the ISBNs for the paperback, the large print edition, the Braille edition, and so on.

I have a database that contains a slice of our catalog data; it includes every ISBN in our catalog.

Yesterday I wrote a script that goes through a list of all the ISBNs in our catalog and asks the xISBN server what other ISBNs are associated with each one. Then it compares each resulting list of ISBNs to the ones already on our system. It assigns a group ID to each cluster so that I can later write a script that does a sort of localized xISBN query on the cached results. This way we can get only the ISBNs relevant to our catalog so that we can generate an Amazon-like list of references to other versions in our catalog. In fact, just for the sake of standards (and bloody-mindedness) I think I'll write an xISBN interface for our little cached slice of the world.
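Here is a rough sketch of that loop, with stand-ins for the parts I'm not reproducing: xisbn_lookup() stands in for the actual call to the xISBN service, and save_group() for the write into our cache table.

# Rough sketch of the caching script described above. xisbn_lookup() and
# save_group() are stand-ins for the real service query and database write.

def xisbn_lookup(isbn):
    """Stand-in for asking the xISBN service which ISBNs relate to this one."""
    raise NotImplementedError("placeholder for the real service call")

def save_group(isbn, group_id):
    """Stand-in for recording (isbn, group_id) in the local cache table."""
    raise NotImplementedError("placeholder for the real database write")

def build_cache(local_isbns):
    """Assign a group ID to each cluster of related ISBNs we actually hold."""
    group_of = {}   # isbn -> group ID already assigned
    next_id = 1
    for isbn in sorted(local_isbns):
        if isbn in group_of:
            continue  # an earlier ISBN already pulled this one into a cluster
        # Keep only the related ISBNs that exist in our own catalog.
        cluster = set(xisbn_lookup(isbn)) & set(local_isbns)
        cluster.add(isbn)
        # Reuse a group ID if part of this cluster was seen before.
        # (A real script would also merge clusters if they ever collide.)
        existing = {group_of[i] for i in cluster if i in group_of}
        if existing:
            group_id = min(existing)
        else:
            group_id = next_id
            next_id += 1
        for member in cluster:
            group_of[member] = group_id
            save_group(member, group_id)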

Our web catalog is reasonably customizable, but I think this goes beyond what its designers planned for. So, following some advice I've been given, I'll learn some AJAX to inject the links into the catalog's web page.

To paraphrase Zippy the Pinhead: Are we yet?

I started the script yesterday at 6pm. It's still running (186,000 records have been associated with groups so far), and OCLC hasn't threatened me with any specific harm yet, so I'm cautiously optimistic.

Oh, and since Technorati doesn't seem to like my other blog, I've decided to move all library-related blogging to this site. Here's hoping this finally shows up on the code4lib blogroll.