Today (well, by now yesterday) I visited a full-day Dutch academic libraries seminar on the future of bibliographic control, much discussed in the US, but also here at home in the Netherlands. This session was born out of a comparable one held a year ago. There were six speakers.

I was much impressed by the contribution of Barend Mons on wikis. Not the regular stuff about sharing, the power of distributed knowledge or the quality of Wikipedia. Instead he presented big stuff about the Knowlet technology developed to describe and disclose scientific facts (relations between source and target concepts, such as 'A is related to B'). Working from Medline, his team distilled millions of facts from the abstracts. Facts are almost all related to each other (because 'B affects C', and so on). A Knowlet is a set of related facts around a central concept. Even scientists themselves may be described in terms of knowlets, representing a kind of fingerprint of a scientist's work. The smart thing is, if I understand Mons well, that the system allows one to find undescribed facts based on collinearity. Another smart thing they did, and will do, is build all this into wiki pages (a demo/test version should appear shortly on http://www.wikiprofessional.info/). The ultimate goal is very ambitious but not unrealistic: to describe each and every concept out there (in all languages) in wiki pages. Talks are already going on with Jimmy Wales and other hot shots in the information world. For now, these ideas have been developed for biomedical and pharmaceutical research, concentrating on proteins, but there is no reason to restrict them to those fields. Each discipline might have its own Wikiprofessional. They also intend to crawl hundreds of repositories, do author disambiguation in a really smart way, and allow scientists to quickly share the main findings of their research and be alerted whenever another scientist adds new facts to a concept they are interested in.
As a result, Mons expected, the need for and price of full-text journals should go down. We will certainly hear more of this…
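To make the idea a bit more concrete for myself, here is a minimal sketch of how facts, knowlets and inference-by-overlap might look in code. This is my own toy illustration, not the actual Knowlet system: the fact triples, function names and the shared-neighbour heuristic standing in for 'collinearity' are all my assumptions.

```python
from itertools import combinations

# Hypothetical mini fact base: (source, relation, target) triples,
# loosely modelled on the post's examples ('A is related to B', 'B affects C').
facts = [
    ("A", "is related to", "B"),
    ("B", "affects", "C"),
    ("A", "affects", "C"),
    ("D", "affects", "C"),
    ("D", "is related to", "B"),
]

def knowlet(concept, facts):
    """All facts in which the concept appears: its 'knowlet'."""
    return {f for f in facts if concept in (f[0], f[2])}

def neighbours(concept, facts):
    """Concepts directly linked to the given concept by some fact."""
    return {f[0] if f[2] == concept else f[2]
            for f in knowlet(concept, facts)}

def candidate_links(facts, min_shared=2):
    """Concept pairs with no direct fact but overlapping neighbourhoods --
    a crude stand-in for finding 'undescribed facts' by collinearity."""
    concepts = {c for f in facts for c in (f[0], f[2])}
    linked = {frozenset((f[0], f[2])) for f in facts}
    candidates = []
    for a, b in combinations(sorted(concepts), 2):
        if frozenset((a, b)) in linked:
            continue  # already described by an explicit fact
        shared = (neighbours(a, facts) & neighbours(b, facts)) - {a, b}
        if len(shared) >= min_shared:
            candidates.append((a, b, shared))
    return candidates

# A and D are never directly linked, but share neighbours B and C,
# so the sketch flags them as a possible undescribed relation.
print(candidate_links(facts))  # → [('A', 'D', {'B', 'C'})]
```

The real system presumably weights relations and works over millions of Medline-derived facts; the point here is only the shape of the data structure.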
I was asked to shed some light on library catalogue enrichment. My presentation focused on the need to make the search experience in library catalogues (as long as they exist) richer. Lorcan Dempsey has said most of what has to be said about catalogues and discovery tools in his post 'Lifting out the catalog discovery experience' almost a year ago. I did a tiny bit of research into the search environment and information types presented in library catalogues. A comparison of some catalogues with Amazon, Google Books, Bol.com, Picarta and WorldCat showed that there is room for improvement. I counted to what extent catalogues support these six goals: identifying, seducing, evaluating, obtaining full text, citing and discovering. In total I identified 56 bits of information or functionality, of which Amazon offered the most. No, indeed, you can't add these up, but hey, I like figures, so I did anyway:
My suggestion that we need to add searchable tables of contents to book descriptions in our catalogues received some positive nodding. Some Dutch libraries, notably Delft, have already done so. Afterwards there was some discussion about the way forward. A salient detail was the reaction to my question what would happen if we ceased adding GOO descriptors to books, now quite common in Dutch libraries, and if we were to remove the possibility to use these terms in searches. It remained silent… some mumbled: 'probably nothing'…
NVB-WB, the academic libraries chapter of the Dutch library association NVB, promised to have all presentations of the day (mostly in Dutch) available on their site, but that might take a few days. However, Barend Mons' presentation is already available (in English), because he gave the same talk at the NIH Wikifair. On my site you'll find my catalogue enrichment story and the data on which that story is based: 56 information items in 11 library and other catalogues.