Paul presented about Coral, which is ERM (Electronic Resource Management) software. Paul told us it is well documented, with an extensive manual. It is software for librarians to use, not end users. It is for managing all your subscriptions to digital resources. Large universities often have hundreds of subscriptions, and licenses, URLs, acquisitions, etc. are very hard to track. Coral was developed to manage this.
It is not packaged, but is still pretty easy to set up. Setting up authentication is the hardest bit, but once you have done that, the rest is pretty easy. It has one big problem: it is only available in English, and it currently cannot be translated. Hopefully, since it is OSS, we can fix this issue, but it is a big task.
Coral is divided into modules:
- Define all publishers, vendors, partners
- Define all licenses
- Highly flexible
- You can attach files such as PDFs
- This tracks the subscription
- You can define workflows
- Store access details
Paul then demoed the software for us.
My laptop battery went flat, so Tom kindly took notes on this session for me so I could write this blog post.
Alvet and Ricardo from EBSCO talked about EDS and the integration work they have done with Koha.
They began by explaining why they had developed this plugin:
- NIWA, a research institute in NZ, wanted a simple interface where their users could access discovery services
- An interface where Koha was the front end
They then explained what a discovery service is:
- A way to access all of the library’s full-text content (electronic and print) in a single search
- High quality metadata = high quality search results
- Relevancy ranking
- Match on subject heading
- Match on article titles
- Match on other keywords
- Match on keywords in abstracts
- Match on full text
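The matching tiers above can be sketched as a toy weighted scoring function (the field names and weights here are my own illustration, not how EDS actually ranks):

```python
# Toy relevancy ranking: weight a match by which field it occurs in.
# Field names and weights are illustrative assumptions, not EDS internals.
FIELD_WEIGHTS = {
    "subject": 5.0,   # match on subject heading
    "title": 4.0,     # match on article title
    "keywords": 3.0,  # match on other keywords
    "abstract": 2.0,  # match on keywords in abstracts
    "fulltext": 1.0,  # match on full text
}

def score(query, doc):
    """Sum field weights for every query term found in that field."""
    terms = query.lower().split()
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        text = doc.get(field, "").lower()
        total += weight * sum(1 for t in terms if t in text)
    return total

def rank(query, docs):
    """Return docs sorted by descending relevancy score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)
```

The point of the tiering is simply that a hit in curated metadata (subject headings, titles) is worth more than a hit buried in the full text, which is why high quality metadata gives high quality search results.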
Next they showed us what EDS looks like in its native interface, so that they could show that it is quite similar in Koha using the plugin.
Then they showed what EDS and Koha look like together:
- showed the NIWA search (Discovery) box, including the detailed list of field codes you can search on.
- Demoed the Catalyst work to use tabs instead of drop-downs to segregate the search options
- When the drop-downs are changed, a pop-up appears that notifies you of the search mode
- demoed the integration between the Koha catalogue and the EDS resources, including patron services, accessing EBSCO resources through the Koha interface, and limiting the resources searched to what the library has available
- demoed reserving resources across Koha/EBSCO and cart functionality
- demoed guest access restrictions to show that limited resources will not be displayed unless authorized
- demoed login and authorization methods (userid/passwords, ip addr restrictions, etc)
- demoed search options (options per page, boolean search options, sort, etc.)
- demoed interaction between search results and checkboxes including the functionality that depends on those boxes (more details, etc)
- demoed accessing EBSCO content from the Koha interface (html results, PDF, etc)
- showed that all EBSCO functionality is available through the Koha interface in advanced search, and that the Koha advanced search can be toggled through tabs/links, including add/remove of search
They explained that support is only available in Koha 3.12+ because the integration has been implemented as a plugin. Plugins allow features to be added quickly in between Koha release cycles. They then stepped us through how to install and configure the plugin.
Improvements for the future:
- Support for newer versions of Koha (3.16, 3.18) (ready at the end of October)
- Research Starters
- Multi-facet support
- Looking to enable default language support in Koha and the Ebsco plug-in
Where do I get it?
If you google for EDS API Koha, you will find a GitHub page that has the newest version available for download. EBSCO also provides a wiki and an Integration Kit, as well as training and help.
Brendan, with help from Tomas, who was translating, talked about possibilities for funding the Koha project.
Brendan mentioned we need money for developments, among other things. He covered the process for getting code in: someone submits a patch, another person signs off, QA checks it, then the release manager checks and either pushes it or not. Out of this whole process, the only part that is directly funded is the initial development of the feature/fix. Once it is passed to the community, we rely almost entirely on volunteers.
Developers get paid mostly for new features, because new things are what libraries want to fund, but there are also things under the hood, often called plumbing problems, that they don't want to fund. Some of these plumbing issues are big pieces of work. Brendan proposes that as a community we need to create some kind of entity that can collect and disburse money, which we can use to fund fixes to these two issues.
Brendan showed the Koha dashboard, showing that today 215 patches need to be signed off and 64 are waiting for QA. He covered the fact that most signoffs are done on a volunteer basis, and by a conservative estimate we need at least 700 hours per 6 months to keep up with development.
Two complementary options:
- Add more people to the project to signoff/test
- Create an entity to collect funding, which we can use to pay people
Big features take a lot of time to test, which is very hard for a volunteer to deal with, so they wait for a while. This also means that if the code base has moved on, rebasing needs to be done. So it would help if we could fundraise and have people working on signoffs and QA as their job.
Brendan then talked about stability vs rapid development. Brendan sees that both points are valid and that it is a balancing act. With so many libraries (10,000-ish), people's livelihoods depend on the stability of the product. However, we need to continue to innovate as well. So get involved and have your ideas presented: mail the list, come to hackfests, chat on IRC, etc. No ideas are bad ideas, only the ones you don't present.
An audience member asked about the idea of a foundation. Brendan said let's start with a funding organisation, and start small with perhaps a donate-now button. But that organisation should not have anything to do with the direction or governance of the project, just collect funds.
Bob and Brendan are going to work on a proposal for this funding organisation during the hackfest, to present to the community and get this ball rolling.
Next up, 4 more students presented their project.
They started by thanking all the teachers of the different subjects who helped them. In their studies they noticed some issues with the display of ISBD, MARC and normal display in the Koha OPAC.
They found some inconsistencies in the display, so they wanted to document these problems so that we as a project can be aware and work on fixing them.
The main issues they noticed with display were for:
- Uniform titles
- Items with subordinate items
- Principal author(s)
They showed some screenshots, demonstrating the issues they found. If the slides are online I will link them in when I find them. And if there are bugs filed I will link them in here also.
Even better, they proposed some solutions:
- Add punctuation around subordinate entities
- Redesign the display of the uniform titles
- Display should be Title, responsibility, then uniform title
Next we had 3 second-year library science students presenting their work. I hope I have summarised their talk accurately.
They mentioned it is still a work in progress, and that they are presenting preliminary findings. Their project was on the uptake of Koha in Córdoba province, researching the changes and impact. They used a generic survey, which they did either by email or in person. They surveyed 56 libraries: public, school, academic and special.
Their results are still preliminary, but they shared what they have with us.
- 13 libraries surveyed so far, 12 universities and 1 public
- 92% of the libraries surveyed implemented Koha
- Administration access and price were 2 big reasons why they chose Koha
- The advantages identified: 76.92% support and maintenance, 25.08% other, 7.68% none
More than 90% of the libraries surveyed are using Koha and all its modules.
In more than 80% of the libraries, librarians have administration access.
Libraries liked the efficiency in cataloguing and the flexibility. They also mentioned the usability and that it is a friendly environment.
The project is still just beginning but there is much more information they can learn.
Next up we had a presentation from the school of Library Science, from 4 first year students.
This was a definite highlight of the conference for me.
Quote of the day in Koha – Quotes from Authors
They talked about how they added quotes from local authors into Koha to help promote reading, using the Quote of the Day Koha function (developed by Chris Nighswonger). This helps to recognize the local authors and promote the local culture. Literature from Córdoba is very vast and varied, so they selected both classic and contemporary authors, both famous and not so famous.
How did they do this?
They first looked at what books people had in their homes, then they searched the library catalogues. They chose the quotes by reading together and working as a group to decide which quotes they would use. They then put up a file of all the quotes, which you can download and use in Koha, at http://www.puntobiblio.com
They then showed us a live demo in Koha of how to edit the quotes of the day. The project allowed them to build a team and to learn more about the local culture and special places. When they started they knew nothing about Koha, but they learnt fast.
It always makes my day when I see features that were developed to suit a single library's needs being used by others.
First up on day 3 was Zeno Tajoli, talking about how to make the MARC frameworks translatable. He made the point that while you can have multiple languages enabled for most of Koha, in the frameworks you can only have one; so while a framework is translatable, it can only be in one language at a time. Other problem areas are authorised values, the calendar, etc. Zeno then showed how we can translate the files; a lot need to be translated by copying and editing the .sql files themselves. Thanks to Bernardo, we can now translate the frameworks using the translation tool at http://translate.koha-community.org
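To give a feel for what "copying and editing the .sql files" amounts to, here is a minimal sketch of a script that swaps quoted English labels for translated ones. The table name and label strings are invented for illustration; a real framework file has many more columns and rows:

```python
# Hypothetical translation table: English framework labels -> Spanish.
# These strings are illustrative, not taken from an actual Koha framework file.
TRANSLATIONS = {
    "Title": "Título",
    "Main entry": "Entrada principal",
}

def translate_sql(sql_text, translations):
    """Replace quoted English labels in framework INSERT statements."""
    for english, translated in translations.items():
        sql_text = sql_text.replace(f"'{english}'", f"'{translated}'")
    return sql_text
```

A naive find-and-replace like this is exactly why the translation tool is a better answer: it tracks which strings are labels, rather than blindly rewriting anything that happens to match.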
But we still have the problem of only one language being able to be used at once. Can we fix this?
Next up Galen remoted in to talk about using Koha with linked data.
He talked about what we can do right now with linked data, starting by explaining how the current record-based view of the world works. He showed a record in MARC21 and then the same record in BIBFRAME.
He then showed how you can map it to an identifier. But the problem is still that it is a massive mindshift from the MARC-centric view of the world to a linked data view.
- Change is hard
- Change is unpredictable
- Change is expensive
- Change is time consuming
Libraries often adopt new technologies too soon, for example:
- 1968 MARC
- 1969 GML invented
- 1978 first SGML
- 1998 XML
If we were to start now, we wouldn’t invent MARC
So what can we do? Well, one option is to BURN THE WORLD DOWN and start again. Or we could play the proprietary vendor game, or wait a few years for somebody to tell us how to solve all our problems. But we could also do what Oslo Public is doing and begin prototyping/experimenting now.
So what can we do right now? Embrace incremental change, take advantage of the fact that MARC tools are improving, and let the authority records save us.
So in Koha we can link authority records to biblio records right now, using the $9 subfield. We could also link instead to a global identifier via a URI, which then becomes a link to an RDF identifier. This means we can link to other sources such as VIAF.
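As a rough illustration of that shift from a local link to a global one, here is a sketch that upgrades a heading's local $9 authority link to a URI in $0 (the lookup table is fabricated; a real implementation would consult the authority file or VIAF itself):

```python
# Fabricated lookup: local authority record IDs -> global URIs.
# The IDs and URI below are made up for illustration.
AUTHORITY_TO_VIAF = {
    "1234": "http://viaf.org/viaf/102333412",  # some personal name authority
}

def globalize_link(subfields):
    """Replace a local $9 authority link with a $0 global URI when known.

    `subfields` is a dict of MARC subfield code -> value for one heading field.
    $0 is the MARC 21 subfield conventionally used for an authority record
    control number or standard identifier, which may be a URI.
    """
    local_id = subfields.get("9")
    uri = AUTHORITY_TO_VIAF.get(local_id)
    if uri:
        out = dict(subfields)
        out.pop("9")
        out["0"] = uri
        return out
    return subfields  # no mapping known: keep the local link as-is
```

The appeal of this incremental approach is that nothing breaks: headings without a known global identifier keep working exactly as before.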
He put his slides up, which contain the examples he showed; again, it is hard to explain in words.
- Catmandu RDF and MARC
- Linked data RFC for Koha
Arnaud talked about how his company AFI changed from a proprietary to an open source model. AFI merged with BibLibre about 6 months ago. Now people from both companies work on Bokeh.
Bokeh is designed to offer librarians tools for managing the digital environment. It runs on top of any ILS, using FTP and ILS-DI.
- Enriched OPAC
- Content aggregator
- Digital library
Bokeh can automatically harvest data from the internet and add it to your bibliographic records, e.g. Wikipedia entries, book covers, DVD trailers, etc., with no librarian intervention. The CMS is very simple to administer. It can also aggregate content, so you can display your catalogue, your resources from OAI or VOD, and other catalogues. Bokeh also functions as a digital library, allowing you to store your texts, video, images, audio, etc. The idea of having it aggregated is to allow for better patron services. Some examples are:
- Personalised mobile services
- Realtime interaction with social media
You can also use Bokeh as conference software, to provide and collect information from delegates. Another idea is to allow multi-channel communication: different displays/interfaces for different devices. Bokeh can also simulate FRBR, using an automatic detection algorithm to group manifestations of the same work. Another thing Bokeh tries to do is use words the patrons understand, not library jargon. Bokeh offers tags, built on the fly from the search results: a prettier way of offering a facet.
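In miniature, that FRBR-style grouping could look something like this: normalise title and author into a crude "work" key and bucket manifestations under it. This is a naive sketch of the general idea, not Bokeh's actual algorithm, which is surely more sophisticated:

```python
import re
from collections import defaultdict

def work_key(title, author):
    """Normalise title + author into a crude 'work' key."""
    def norm(s):
        # Lowercase and strip punctuation so trivial variations match.
        return re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
    return (norm(title), norm(author))

def group_manifestations(records):
    """Group records (dicts with title/author/format) by work key."""
    groups = defaultdict(list)
    for rec in records:
        groups[work_key(rec["title"], rec["author"])].append(rec)
    return groups
```

Real-world grouping has to cope with translated titles, author name variants, and bad data, which is why automatic FRBR detection is genuinely hard.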
Bokeh is multilingual, but French is the main language, and only French libraries use it currently. Arnaud showed a bunch of examples, which you probably need to see; it's too hard to explain in words.
Arnaud has put up his slides here.