Blog


16 May, 2023
Despite the rapid growth in the volume of born-digital objects, born-analogue content is still a crucial part of many collections. Digitisation is a fundamental process that all digitally preserved born-analogue objects go through and, as many can attest, it’s not without its pitfalls. The digitisation process can be a significant barrier to preserving born-analogue content, and it has significant implications for maintaining the accuracy and authenticity of digital objects.

Careful Planning Can Help Negate The Cost Of Digitisation

The process of digitisation can be a lengthy and expensive exercise. Digitising collections requires significant resources, including specialised equipment, trained personnel, and time. In some cases, digitisation can take years to complete, making it an impractical undertaking for many organisations. Compared with born-digital content, these additional tasks in the preservation workflow significantly increase the time invested in each artefact, reducing the throughput an archivist can achieve between their pre-accession collections and the digital archive itself. Not only does the archivist need to select scanning apparatus that meets their quality requirements, they also have to be mindful of the ongoing challenges of retaining their digitised records: higher-quality scans could consume more storage space, for example. The choice of formats and the characteristics of the digitisation systems can also have a drastic effect on the cost of a digitisation project. Selecting a scanning system that produces output in a format suitable for preservation can negate the need for additional format migration, reducing processing and storage requirements. Careful planning like this can help reduce the cost, duration, complexity, and environmental impact of your digitisation project.

They'll Never Be Perfect
09 May, 2023
Curate is the only Digital Curation platform that gives you total control over your pre-transfer collections and lets you customise how the system responds to duplicate uploads. Simply open the advanced options menu using the ⋮ icon in the transfer window to select your preferences.
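Curate's own implementation isn't described in the post, but as a rough, hypothetical sketch of how duplicate handling at transfer commonly works: hash the incoming file, look the checksum up in an index of what has already been received, and apply whichever policy the user selected. All names below (checksum_index, handle_upload, the policy strings) are illustrative assumptions, not Curate's API.

```python
import hashlib
from pathlib import Path

# Hypothetical index mapping SHA-256 checksums to files already ingested.
checksum_index: dict[str, list[Path]] = {}

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks so large uploads don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def handle_upload(path: Path, on_duplicate: str = "skip") -> str:
    """Apply a user-selected policy when a bit-identical file already exists.

    on_duplicate: "skip" ignores the new copy, "replace" supersedes the old
    one, and "keep_both" records the new copy alongside the original.
    """
    checksum = sha256_of(path)
    existing = checksum_index.get(checksum)
    if not existing:
        checksum_index[checksum] = [path]
        return f"stored {path.name}"
    if on_duplicate == "skip":
        return f"skipped {path.name}: identical to {existing[0].name}"
    if on_duplicate == "replace":
        checksum_index[checksum] = [path]
        return f"replaced {existing[0].name} with {path.name}"
    checksum_index[checksum].append(path)
    return f"kept both {path.name} and {existing[0].name}"
```

Comparing checksums rather than filenames means a renamed copy of the same bitstream is still recognised as a duplicate, which is usually the behaviour an archivist wants during transfer.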
04 May, 2023
Open-source software has always been a key resource in digital preservation efforts. The flexibility, public accessibility (allowing users to modify and/or audit it as needed), and often low cost of open-source software make it an appealing choice for digital archivists, not only for its resource efficiency, but because it can directly increase a digital collection's chance of surviving for extended periods of time. Open-source software can provide more control over the digital preservation process, enabling users to make more specific choices in their selection of tools to better preserve their collections. Let’s explore how open-source principles guide and are reflected in digital preservation guidelines, and examine the benefits and drawbacks of using open-source software in your digital preservation system.

Open Principles

Parallels between open-source principles and the concepts that guide digital preservation practice are easy to identify. In both cases, providing a framework for collaboration, transparency, and community involvement increases the efficacy and veracity of their outputs. These principles align well with the goals of digital preservation, which seeks to ensure the long-term availability and accessibility of digital information, by encouraging development that can be used and audited by anyone. Open-source principles emphasise transparency, meaning that the source code for the software is publicly accessible and can be scrutinised, modified, and improved by anyone. In a digital preservation context, this transparency enables organisations to develop a far deeper understanding of the processes they subject their data to, exercise more control over their digital preservation strategies, place greater trust in their systems, and make necessary decisions as technology evolves. This reciprocally improves the custodian organisation's understanding of their collections, empowering them to describe and treat future content even more effectively.

The collaborative nature of open source is also a natural fit for digital preservation: a community of users who share knowledge, expertise, and resources to improve and enhance tools has the capacity to maintain stable software for longer than ephemeral private vendors. Digital preservation also naturally requires multi-disciplinary effort, not only in the development of guidelines, strategies, and tools, but also in the physical implementation, ongoing usage, and maintenance of these systems and practices, frequently requiring the involvement of IT professionals, librarians, archivists, and other highly specialised stakeholders. Open-source software offers an appealing alternative to the typical approach of proprietary solutions, which often attempt to aggregate many functions into monolithic services to reduce complexity, but in turn can reduce or eliminate the control an archivist has over their collections. In some cases, this reaches the point where the archivist cannot reasonably be said to be taking true responsibility as the incumbent custodian of their content: they have de facto ceded the decision-making process of how their collections are treated to the vendor. This not only runs counter to the principles most digital-preservation authorities espouse, namely that the archivist assumes accountability, custodianship, and responsibility for the preservation of their collections; it is also dangerous for the collections themselves.
The authority on how, when, and why any actions are taken on content should always fundamentally be the archivist in charge of the collections. Any bespoke processes or protected methodologies implemented by proprietary software naturally reduce the precision and quality of the custodian's understanding of their own collections. Regardless of documentation or escrow arrangements, users will never be able to understand a proprietary solution at the lowest levels as closely as an open one.

Open Software, Open Risks
20 Apr, 2023
Emulation is the process of encapsulating the functionality of one system so it can be replicated on another, and it is often recommended as one of several viable strategies a software vendor, end-user organisation, or archivist could implement to help preserve their artefacts. Emulation as a digital preservation strategy is not without its pitfalls, however: complexity, resource demands, and dispersal of information are all challenges you might encounter when implementing an emulation-based digital preservation strategy. Fortunately, other digital preservation strategies, like normalisation, can naturally offset the drawbacks of emulation and together form a more comprehensive digital preservation strategy.

Emulation in digital preservation is a complex task, and developing a robust emulation of a given object often presents its own unique challenges, but the concept is summarised concisely by the Digital Preservation Coalition: "Emulation offers an alternative solution to migration that allows archives to preserve and deliver access to users directly from original files. This technique attempts to preserve the original behaviours and the look and feel of applications, as well as informational content. An emulator, as the name implies, is a programme that runs on a current computer architecture but provides the same facilities and behaviour as an earlier one."

As noted by the DPC, emulation can be useful for preserving digital content that requires a highly specific software environment. For example, if a digital archive contains software written for an older operating system, emulation can be used to recreate that environment on a modern system. This allows the software to run as it was intended, even if the original software is no longer available or supported.

The Perfect Solution?
05 Apr, 2023
Building digital archives is increasingly becoming an essential activity for organisations as they seek to preserve and provide access to their most valuable digital information. As business-critical data, whether that's historical artefacts or active files, increasingly becomes born-digital or is digitised, interest in conformance to standards that provide best practice for the ongoing protection of, and access to, valuable assets has grown significantly. Building a digital archive that meets international standards is a highly complex process. This is compounded by the lack of easily digestible resources that outline a basic path to attaining compliance with specialised standards like those related to digital archives.
30 Mar, 2023
The Internet Archive is a non-profit organisation founded in 1996 with the mission of preserving and providing universal access to all human knowledge. With a collection of over 1.4 million books, historical documents, and cached versions of websites, the Archive is a digital repository of immense value. Despite its admirable goal of preserving collective knowledge, the Archive has recently found itself embroiled in a controversy surrounding its practices.

The crux of the controversy is its Emergency Library programme, which allowed unlimited borrowing of eBooks during the COVID-19 pandemic. The programme, which was initially planned to last through to June 30, 2020, was ended earlier than expected due to legal pressure from publishers. Legal action (read the full judgement here) has recently concluded that the Archive was not acting in good faith with regard to copyright law, and that it was not reasonable under fair use to distribute unlimited digital copies of a limited number of distributor-owned physical books. Instead, the court considered the digital copies derivative works and therefore incompatible with the original copyright holders' rights. How this specifically affects the Archive's services going forward is currently unclear but, since the scope of the criticism against the Archive also extends to their general digital-library practice, it's fair to assume they may be forced to rescind a number of their records and will certainly consider their rights-management policies more carefully in the future, despite their protestations at the outcome of the legal action (check out their full response here).

NB: With any subject as contentious as legal action, misinformation is an even greater risk than usual. We would urge you to consult as many official sources as possible for trustworthy information about the ongoing controversy.

The implications of the debate are extremely relevant to the construction of robust digital archives: it's easy to assign the highest risk value to, for lack of a better word, impressive and obvious dangers your content might face over long periods. Risks like natural disaster or dramatic system failure are relatively easy to plan for and, realistically, are typically well-planned-for eventualities. This specific case neatly distils the peril of a significantly less glamorous, and often under-addressed, risk that has an even greater potential to seriously disrupt or affect your ability to protect your collections: rights management. Digital archive best practice universally recommends rights-management strategies designed to avoid scenarios like this one. True, you're unlikely to find your own collections at the centre of a rights debate of this scale, but it's important nonetheless to recognise the risk improper rights management can pose to your digital archives.

A Pressing Issue
16 Mar, 2023
Navigating Uncharted Digital Waters
10 Mar, 2023
Integrity Checking in Digital Preservation: The Essential Basics Guide 🔍
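The post itself isn't excerpted here, but to make the topic in the title concrete, below is a minimal sketch of the basic fixity workflow such a guide typically covers: record a checksum for every object at ingest, then periodically recompute and compare. The JSON manifest layout is an assumption for illustration only; in practice, established formats such as BagIt manifests serve the same purpose.

```python
import hashlib
import json
from pathlib import Path

def file_checksum(path: Path) -> str:
    """Compute a SHA-256 checksum for a file (hashlib.file_digest needs Python 3.11+)."""
    with path.open("rb") as handle:
        return hashlib.file_digest(handle, "sha256").hexdigest()

def write_manifest(archive_dir: Path, manifest: Path) -> None:
    """Record the current checksum of every file under archive_dir.

    Keep the manifest outside archive_dir so it isn't included in itself.
    """
    checksums = {
        str(p.relative_to(archive_dir)): file_checksum(p)
        for p in archive_dir.rglob("*") if p.is_file()
    }
    manifest.write_text(json.dumps(checksums, indent=2))

def verify_manifest(archive_dir: Path, manifest: Path) -> list[str]:
    """Return the files that are missing or whose checksum no longer matches."""
    recorded = json.loads(manifest.read_text())
    failures = []
    for relative_path, expected in recorded.items():
        target = archive_dir / relative_path
        if not target.exists() or file_checksum(target) != expected:
            failures.append(relative_path)
    return failures
```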
23 Feb, 2023
The Problem with Traditional Appraisal Guidance

Without effective appraisal, digital preservation efforts can quickly become overwhelming and unsustainable. Appraisal, selection, and arrangement often dominate the time a custodian of a given collection has available to build their archives.
20 Feb, 2023
We're excited to announce a new feature for Curate: custom tags. Custom tags offer an entirely new way to organise, sort, search, and understand your files. Along with the capability to add simple ISAD(G) or Dublin Core-based descriptive metadata templates to your objects, you can now add your own completely custom values in the form of colourful tags. Tags allow you to break down barriers between your separate files and build a better understanding of the content in your collections. Adding tags to an object enables several new powerful organisational functions for you to explore when interacting with your collections:

Information at a glance

Your tags appear front and centre in the file display, intuitively telling you about the content you're viewing at a quick glance.
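The post doesn't describe how Curate stores tags internally, so the following is a small, hypothetical sketch of the general idea: keep a mapping from files to their custom tags and query it to search and summarise a collection independently of folder structure. All file names and tags below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical mapping of file identifiers to the custom tags applied to them.
file_tags: dict[str, set[str]] = {
    "minutes_2021.pdf": {"governance", "needs-review"},
    "oral_history_04.wav": {"audio", "oral-history"},
    "oral_history_05.wav": {"audio", "oral-history", "needs-review"},
}

def files_with_tag(tag: str) -> list[str]:
    """Return every file carrying the given tag, regardless of folder structure."""
    return sorted(name for name, tags in file_tags.items() if tag in tags)

def tag_counts() -> dict[str, int]:
    """Summarise how often each tag is used across the collection."""
    counts: dict[str, int] = defaultdict(int)
    for tags in file_tags.values():
        for tag in tags:
            counts[tag] += 1
    return dict(counts)

print(files_with_tag("needs-review"))  # ['minutes_2021.pdf', 'oral_history_05.wav']
print(tag_counts())
```

Because tags cut across folder hierarchies and catalogue fields, a single query like this can pull together related material that would otherwise sit in separate series.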