One of the odd things about a community archiving system is that it is used in fits and starts. Much of the time it may simply sit there, with peaks of activity coming when volunteers have time to carry out archiving tasks, or when training is carried out. The Assynt Community Digital Archive has four workstations, and these, too, are not used on a daily basis, but when they are used, they are used fairly intensively. So if something on the systems is not quite working correctly, it may be some time before anyone notices.
This has been the case just recently. As reported earlier, the Assynt Field Club has an archiving project under way at the moment. The appointed archivist, Avril Haines, is doing a great job curating the physical material, digitising, describing and uploading the information the Field Club is currently interested in. But as she uses the systems, little hassles creep in. An example is the scanning application: from time to time, the "Save" button needs to be clicked twice. There is almost certainly an explanation for this strange behaviour, but technical problem-solving usually relies on consistency and replicability: the issue needs to arise every time, and you need to be able to reproduce it on demand in order to solve it. This is much more difficult when the systems are only in use from time to time.
One should probably focus on the fact that, in spite of the infrequent use, the systems are remarkably reliable, and the issues that have occurred are more like hassles than show-stoppers. It would be a problem if users of the systems became concerned that, for example, a system would not start, or that expected software was not available, and none of these has occurred. Some little events are also due to unfamiliarity with software. For example, recent versions of the Mozilla Firefox web browser come up with a message on startup that may sound alarming, alerting the user that the previous pages and tabs were shut down and asking whether they should be restored. If you're not used to Firefox, that could be interpreted as an error (and even if you are used to Firefox, it can be very frustrating to see the same message every time you start the program).
But the ad hoc nature of the way the systems are used is one of the oddities of running this type of environment.
The Assynt Field Club, a local society for those interested in natural history and ecology, has a collection of material gathered over many years, in the form of physical records, slides, maps, diary entries, logs and many other information records. Many of these records are the work of Pat and Ian Evans, who also received sighting records, photographs and other information from field club members, Assynt residents, visitors and field specialists. Much of this information pre-dates the common availability of digital formats. The Field Club is aware of the archival value of all this information, but the task of digitising it all and making it accessible is a huge one, almost certainly beyond what voluntary work alone could achieve. The Field Club has long been aware of the possibilities the Assynt Community Digital Archive represents for their archival requirements.
The suggestion was made to seek funding for a pilot exercise, and the North Highland Initiative was approached to see how such an exercise would fit with their priorities. The intent was that the Field Club would run the project, using the facilities of the Assynt Community Digital Archive. NHI was enthusiastic about the proposal, and it was agreed to fund a short project of 22 days as a pilot to determine the value, scope and achievability of such an archiving project. A local resident, Avril Haines, was selected as the archivist from a very capable and satisfyingly large list of applicants. Avril's training in digital techniques and archiving principles started in January 2014. The project is run under the auspices of NHI, with local oversight by the Field Club representatives, mainly Ian Evans and Andy Summers. Eilidh Todd of NHI has been wonderful regarding the administration and facilitation of the post, and Ian Mitchell very helpful with candidate selection.
At the end of the pilot, a report will be drawn up to set out the lessons of the project and to determine criteria for future work of this kind, if applicable.
This is a great example of a local social group making use of, and "owning", a section of the Assynt Community Digital Archive, achieving their own ends while also contributing towards a greater whole. For further information, email Stevan- ku.oc.evalsnitnull@evalsnit
There are some principles for deploying a digital heritage collection which 25 years of IT experience and supporting technical concepts make self-evident. Most of these are documented on this site, such as the idea of using virtualisation to separate out the various services that make up a digital collection, and the separation of the long-term storage of digital artefacts from their interpretation and display. In our case, we use DSpace software along with its robust database (PostgreSQL) to ensure that our storage warehouse of data meets the requirements of a long-term repository. DSpace can, of course, simply be browsed for content, but from time to time, it may be desirable to raise a project to curate and interpret information contained in DSpace. I have always thought the display and interpretation requirement is often a short-term one, in comparison. So to my mind, a content management system like WordPress or Drupal would be perfect for such interpretation, simply dipping into DSpace to select the information to display.
The Roy Rosenzweig Centre for History and New Media, though, has developed Omeka specifically to bridge the gap between appropriately rigorous digital storage and displaying collections of material. Omeka is developed and made available under the General Public Licence, so, as Free and Open Source Software, it provides future certainty to users, and the fact that it is free of cost makes it easy to deploy. While Omeka can be used as the underlying repository as well, this may be best considered appropriate for small collections. It is clear, though, that the combination of DSpace for long-term storage and Omeka for interpretation and deployment is a powerful one, which I hope to explore further in the near future.
Omeka is as easy to deploy as the blogging platform WordPress, and similarly uses a standard Linux (operating system), Apache (web server), MySQL (database) and PHP (development language) server. Similar to the WordPress.com service, it is possible to use Omeka's own servers at omeka.net to deploy your own site, but most DSpace users will probably have their own capability to deploy Omeka.
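As a rough sketch of what such a deployment looks like on a Debian-style LAMP server (the release version, paths and database names here are placeholders for illustration, not a verified recipe; check omeka.org for current instructions):

```shell
# Install the LAMP prerequisites (Debian/Ubuntu package names)
sudo apt-get install apache2 mysql-server php5 php5-mysql unzip

# Fetch and unpack an Omeka release into the web root
# (version number is illustrative -- check omeka.org for the current one)
wget http://omeka.org/files/omeka-2.1.zip
sudo unzip omeka-2.1.zip -d /var/www/
sudo mv /var/www/omeka-2.1 /var/www/omeka

# Create a MySQL database for Omeka, then record its details in db.ini
mysql -u root -p -e "CREATE DATABASE omeka CHARACTER SET utf8;"
sudo nano /var/www/omeka/db.ini   # fill in host, username, password, dbname

# Omeka stores uploads under files/, so the web server needs write access
sudo chown -R www-data:www-data /var/www/omeka/files

# Then visit http://yourserver/omeka/install/ to finish setup in the browser
```

Anyone already running DSpace on a Linux server will find nothing unfamiliar here.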
For projects like the Assynt Community Digital Archive, where a decision has been taken to keep the main DSpace repository off the public internet, Omeka offers the possibility of displaying an informational sub-set of data, particularly useful for public relations or marketing purposes, or to fulfil a possible requirement of funders. Omeka also simplifies links with Zotero via a COinS plugin that generates bibliographic data, making adding bibliographic data to Zotero a click away. We have already discussed the usefulness of Zotero on this site, so deploying an Omeka site which makes it easy for browsers to populate a local Zotero database makes sense.
Omeka has an "Exhibits" plugin that allows one to use information from the collections Omeka contains to add interpretation. It seems to me that this is a particularly useful part of Omeka, and provides helpful separation of data from presentation. The "Exhibits" plugin seems particularly useful for adding narrative to DSpace artefacts, but there will inevitably be an intervening step of getting DSpace information into Omeka, which may prove a clumsy way of managing the data.
There are some limitations or omissions to Omeka from my perspective. One is that it would be good to have an Open Document Format viewer, allowing the display of uploaded ODF-format files. A workaround is that Omeka can display Google Doc information, but handing over data to third-party commercial entities does not sit comfortably with me. Another is that Omeka may be seen in some circumstances to fulfil the functionality of a fully-fledged storage repository like DSpace, so positioning it in a digital heritage project may be quite tricky (though on further thought, maybe that's a strength of Omeka). From a technical perspective, it would also be good to have a degree of database independence, or at least support for PostgreSQL as well as MySQL.
Omeka is clearly a strong addition to particular digital humanities project deployments.
ADDENDUM:- Patrick Murray-John, one of the developers of Omeka, has been in contact to point out that the latest version of Omeka has programming interfaces designed to make it easier to exchange data with systems like DSpace. This seems to show an understanding of their target audience, and as the development team clearly found this blog post, an active interest in their project, both of which are positive points for anyone contemplating an Omeka deployment.
A wiki is a brilliant idea, created by Ward Cunningham when he released his WikiWikiWeb, a system that allowed groups of people to create and edit documents and notes easily and with minimal training. To create links, it uses the concept of WikiWords, an example being this - WikiWord - with capitalisation in mid-word. This tells the system to create a link to a page of that name, creating the page if it does not yet exist. Similarly, adding bold, italics, headers, tables and other attributes is easily done with simple mark-up. The result was a web of information that is easy to use, and so powerful that one of the Internet's most popular sites is based on the same principles - Wikipedia. There are interesting cultural connotations to the term wiki, the name originating from a Hawaiian word meaning "quick". As Cunningham himself noted, "the beauty of Wiki is in the freedom, simplicity, and power it offers." (Source:- http://en.wikipedia.org/wiki/WikiWikiWeb and http://en.wikipedia.org/wiki/Wiki)
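The WikiWord convention is simple enough to capture in a short regular expression - a sketch of the idea, not how any particular wiki engine actually implements it (this assumes GNU grep, as found on Linux):

```shell
# Match CamelCase WikiWords: two or more capitalised word-parts run together.
# Ordinary capitalised words like "See" have only one part, so they don't match.
echo "See the WikiWord convention on the FrontPage of the wiki" \
  | grep -oE '\b[A-Z][a-z]+([A-Z][a-z]+)+\b'
# prints:
# WikiWord
# FrontPage
```

The mid-word capital is all the engine needs to decide that a link is intended.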
Quite soon people began to realise how useful wikis could be for personal information management. Computers have traditionally been good at managing structured information, but coping with our brain dumps is a more difficult challenge. There have been some wonderful attempts to develop software to manage this lack of structure, the best one, in my view, being the short-lived Lotus Agenda. (https://en.wikipedia.org/wiki/Lotus_Agenda) Wikis work well, but can be accused of requiring the user to learn quite a lot before they become useful.
Zim wiki (http://zim-wiki.org) resolves many of the issues with web-based wikis regarding having to learn the mark-up language, and is an easy-to-use personal information management tool. It does not have intelligent parsing capabilities of the kind Lotus Agenda offered, where typing "Meeting with Susan next Wednesday at 9" resulted in a diary entry for the correct day, an addition to a database entry on "Susan", and so on. But it does allow you to add a structure to random ideas, add pictures or other media objects, and keep these notes in a simple way, without the use of complex databases. Zim stores all its information in simple text files, leaving your data always accessible, being easy to back up, and not imposing any limitations.
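To illustrate, a Zim page is just a text file with a short header followed by lightweight mark-up (this is a made-up sample, not one of my actual notes):

```text
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4

====== Project Notes ======
Some **bold** text, some //italics//, and a link to [[AnotherPage]].

* A bullet point
* Another bullet point
```

Because it is plain text, any editor, backup tool or version control system can work with it - there is no database to rescue your notes from.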
This is part of the zim wiki I used during my degree:-
I have also used a useful journalling capability. Clicking on a "Today" button in the built-in calendar results in a new page with today's date, and an automatically structured calendar index entry. The power in this lies in the ability to insert images, resulting in a straightforward but very usable journalling system.
It's even possible to have multiple Zim notebooks on the go at the same time. I separate out my general notes from my journal entries, and during my degree, my UHI notes were in a separate notebook too.
Zim has a plugin architecture which extends its native capability while staying within the principles that make the program so useful. Mathematicians will enjoy the ability to insert equations, while musicians can insert music notation using the Lilypond program. Spell checking is done with a plugin, as is version control, if you want it.
When it comes to making use of the notes, the first question is about searching, and Zim is good at this. But it is also possible to export a set of notes as HTML (web) files, which can be uploaded to a web server for wider access. The Zim home page notes that it is itself written using Zim, a delightful bit of recursion. But Zim also has an additional trick up its sleeve. Let's say you're at a conference, and you want a wider group to access some notes. Zim has a built-in web server, which you can start (it doesn't run automatically - that would not be secure) and allow others to access your Zim notes.
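Starting that built-in server is a one-liner from the command line (the notebook path here is a placeholder, and the option details may vary between versions - check `zim --help` on your installation):

```shell
# Serve a Zim notebook read-only over HTTP on port 8080 (runs until interrupted)
zim --server --port 8080 ~/Notebooks/ConferenceNotes
# ...then colleagues on the same network can browse to http://your-machine:8080/
```

When you close the server, the notes are private again - nothing is left running or exposed.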
Zim has been under development under the management of its creator, Jaap G Karssenberg, for some years, and is currently at version 0.60. Do not be concerned that it is not yet at version 1.0, as it has been fully usable for quite a few years. Zim is published under the General Public Licence, GPL2, so the source will always be available. For those using Linux, if you use Debian or Ubuntu, Zim can be installed from the software repositories directly (apt-get install zim). Other Linux users may download the source, which is written in Python and easily installable, and run the setup.py script. Windows users can download an installable executable from the link on the downloads page on the zim-wiki.org site.
And one great thing in favour of an information tool like Zim is that it stays within your own control. You can be sure that no-one will index, search, hand over to a third party, or otherwise abuse your information, which is the default assumption if you place your information with a "cloud" internet-based service. Keep your information under your own control in these days when, by their own admission, trust in third parties must be so low.
On a number of occasions during my recently completed degree, lecturers noted good adherence to citation conventions in my work. This I found a little embarrassing, as it was really all down to the tools I used for citation management. I've bumped into various proprietary citation managers over the years, and many of them seemed to me more a mechanism to lock you in to the use of the tool than a way of making citation easier and more flexible. Readers of this site will understand that community archiving is very much about making sure that the data under management can always be liberated from the storage silos in which they are kept well into the future, so digital archiving resists the idea of being locked into the use of particular software, especially through restricted file formats or other restrictions. This is where Free and Open Source software becomes a natural fit with archiving, as there is no benefit to Free Software producers in trying to lock the user in with restrictions.
So to find a citation management tool under a Free Software licence was wonderful, and Zotero integrated really well with the other software crucial to my degree, Firefox as a browser and LibreOffice for word processing and office productivity. With Zotero, I could search for the book I wished to cite, finding that many booksellers online provide the underlying citation information on their web sites, such that it was a single click to add the details of the book straight into the Zotero database. An add-in for LibreOffice Writer (I understand one exists for MS Word too) then allowed me to click to add that citation in the format required by my University, and subsequently to set out, again automatically, the table of references. It's also possible to create database entries directly from web pages, important when so much information is web-based these days, and also to create ad hoc entries.
But Zotero then starts becoming something more. It is possible to include PDF files or add other attachments as part of the database, for example, at which point Zotero starts becoming a sort of personal archive in itself. It is also possible to export the citation library in various formats, and to generate reports of the contents. So a Zotero library may ultimately be uploaded into another archive elsewhere, where its goodness can live on. An example report of all the references I used in my degree can be found here. Amazing to think that it runs to 184 pages.
Coming back to the value of Free Software in this role: during my degree, the University changed the citation tool it made available to students and staff. The migration from the old to the new was clearly a painful one, though that pain bypassed me as I happily continued to use Zotero. My experience of Free Software suggests that if another Free tool superseded Zotero, migration tools would very likely be provided, as the formats in which the data is held are not restricted. In other words, your data remains safe.