Mittwoch, 27. Januar 2021

Impossible - or how I learned to read data storage media at the speed of light and what it's good for

When I receive data carriers from an estate, I want to get a quick overview of what is on the floppy disk, the CD-ROM, the USB stick or the hard disk drive, so that I can look at the interesting things first.

But I only know what is there once I have read the media, right? A classic chicken-and-egg problem.
I discovered the crucial clue to the solution in a 2014 talk by Simson Garfinkel, "Digital Forensics Innovation: Searching A Terabyte of Data in 10 minutes".

What is Random Sampling?

Random sampling is nothing more than looking at only every n-th part of a total set and inferring the big picture.

To find out what is on a medium, it would be sufficient to look at random blocks and determine for them, based on their byte structure, whether they fall into the categories "empty", "random", "text", "video" or "undef".

Exactly this approach is implemented in the Perl module File::FormatIdentification::RandomSampling, which can be found on CPAN.

The category "empty" is dominated by sequences of zero bytes; in the category "random" the byte values are almost equally distributed; in the category "text", values for the characters "a-z" from the ASCII character set appear frequently; "video" contains frequent byte sequences resulting from the basic structure of MPEG. Everything else is subsumed under "undef".
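These heuristics can be sketched roughly as follows. The Python fragment below is only an illustration of the idea, with invented thresholds; it is not the Perl module's actual rules:

```python
import math
from collections import Counter

def classify_block(block: bytes) -> str:
    """Rough block classifier: a sketch of the idea, not the module's code."""
    if not block:
        return "undef"
    counts = Counter(block)
    # "empty": dominated by zero bytes
    if counts.get(0, 0) / len(block) > 0.95:
        return "empty"
    # "text": many ASCII "a-z" characters
    lower = sum(counts.get(b, 0) for b in range(ord('a'), ord('z') + 1))
    if lower / len(block) > 0.3:
        return "text"
    # "random": byte values nearly equally distributed, entropy close to 8 bits
    entropy = -sum((n / len(block)) * math.log2(n / len(block))
                   for n in counts.values())
    if entropy > 7.5:
        return "random"
    return "undef"
```

For example, a sector of zero bytes classifies as "empty", a uniformly distributed sector as "random", and a run of English prose as "text".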


The above Perl module comes with a command-line program. The following simple call:

perl -I lib bin/ --percent=0.000001 --image=/dev/mapper/laptop--vg-home

provides the following output:

Scanning Image /dev/mapper/laptop--vg-home with size 728982618112, checking 1423 sectors
scanning [...]   
Estimate, that the image '/dev/mapper/laptop--vg-home'
has percent of following data types:
    44.6% random/encrypted/compressed
    35.6% undef
    11.0% empty
     5.4% video/audio
     3.5% text

The complete output is even more extensive. It is important to note that the examined partition was 668 GB in size and was scanned in just 15 s.


Importantly, the output provides only a rough estimate of what might be on the medium. The chosen sample size (here: via the --percent parameter) determines the reliability of the estimate as well as how long it takes to deliver a result.
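For illustration, the sector count from the example output above can be reproduced with a little arithmetic, assuming 512-byte sectors. Note that the numbers only line up if the --percent value is interpreted as a fraction rather than a percentage:

```python
image_size = 728982618112   # bytes, as reported in the scan output above
sector_size = 512           # assumed sector size
fraction = 0.000001         # the value passed via --percent in the example

total_sectors = image_size // sector_size   # 1423794176 sectors
sampled = int(total_sectors * fraction)     # 1423 sectors, matching the output
```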

More ideas

In the above module, I have implemented an experimental output of the MIME types potentially present on the medium. This is not very stable yet and needs more work, but it can help to estimate even better whether the files on a disk are interesting enough to prioritize it. Here is an example output:

The next mimetype estimation is experimental and needs further work:
    87.9% unknown
     3.5% application/pdf
     1.1% video/quicktime
     0.8% image/gif
     0.8% text/java
     0.7% application/msword
     0.6% text/markdown
     0.6% application/vnd.openxmlformats-officedocument.wordprocessingml.document
     0.6% application/xml
     0.4% application/msaccess
     0.4% application/navimap
     0.4% application/rtf
     0.3% image/png
     0.2% application/arj
     0.1% application/
     0.1% text/html

The approach is to determine the MIME type of the files in a test corpus using other tools, derive typical bytegram values, and feed the whole thing to a decision-tree learner. If you are interested, you are welcome to contribute to the module.
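To illustrate the idea, here is a minimal Python sketch of what extracting "bytegram" features could look like. It is an illustration of the approach, not the module's actual code:

```python
from collections import Counter

def bytegram_features(block: bytes, top: int = 5):
    """Count byte bigrams ("bytegrams") in a block and return the most
    frequent ones -- the kind of feature vector a decision-tree learner
    could then be trained on, given MIME-type labels from other tools."""
    bigrams = Counter(zip(block, block[1:]))
    return bigrams.most_common(top)
```

For instance, `bytegram_features(b'abababab')` reports the pair (97, 98), i.e. "ab", as the most frequent bigram.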

Happy scanning!

Montag, 10. August 2020

It is nonsense to consider significant properties only at file level

It appears that most archives record significant properties at the file level (incidentally, what they often mean are technical properties, which is not the same thing; but that is a topic for another blog post). This is insufficient, and I will give two examples.

Example 1 - Retro-digitised material

If monographs are scanned, as we do in-house, in order to preserve the originals and make them accessible to users, images are created. If you look at these image files, you can determine the following significant properties:

  • readable
  • accessible for OCR analysis
  • reproducible
  • maybe even true to color


These properties can then be used to define technical parameters that can be found in certain requirement profiles and can lead, for example, to the recommendation of the TIFF file format.

In the above consideration, the significant property "the order of the scans should correspond to the original" (pagination) is missing from the list. This property could be implemented by combining all scan pages into one file, e.g. as BigTIFF or PDF/A. However, there may be good reasons not to include all pages in one file. What then? The remaining option is to add a file describing the structure of the digitized material in addition to the TIFF files. This can be a METS XML file, for example. METS is a good choice because it was created for this very purpose. Hmmm, isn't METS a metadata format? And doesn't metadata belong outside of the payload? And isn't METS used by several archive information systems to map the AIPs? So can't I just pack the structuring data into that?


It is true, METS is a metadata format. And it is true that METS is often used to describe container structures in SIPs or AIPs. But we have to distinguish between metadata describing the IE (i.e. the payload) and metadata inherently belonging to the payload. This is not easy, but here the significant properties help us: If the METS is used, as in our example, to represent the significant property "pagination", then the METS is part of the IE, otherwise it is not.
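For illustration, such a structure file could record the pagination roughly like this. This is a minimal, hand-written METS fragment with invented identifiers and file names, not a complete METS document:

```xml
<mets xmlns="http://www.loc.gov/METS/"
      xmlns:xlink="http://www.w3.org/1999/xlink">
  <fileSec>
    <fileGrp USE="MASTER">
      <file ID="f0001"><FLocat LOCTYPE="URL" xlink:href="scan_0001.tif"/></file>
      <file ID="f0002"><FLocat LOCTYPE="URL" xlink:href="scan_0002.tif"/></file>
    </fileGrp>
  </fileSec>
  <!-- the physical structMap records the page order (pagination) -->
  <structMap TYPE="PHYSICAL">
    <div TYPE="monograph">
      <div TYPE="page" ORDER="1"><fptr FILEID="f0001"/></div>
      <div TYPE="page" ORDER="2"><fptr FILEID="f0002"/></div>
    </div>
  </structMap>
</mets>
```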

Now you might be tempted to get sloppy and just put the "pagination" into the METS of the AIP. Is that a good idea? No. Because the IE should be kept available and usable. The AIP should only contain the metadata necessary to ensure availability. But when a user later accesses the payload via a DIP, they should have everything together, i.e. an intellectual unit as it was actually intended. This is the principle of independence.

I admit that sounds abstract and difficult. But let us try an analogy. If I have loose pages whose order is important, then the order is important whether the pages are archived or not. So I bind them into a book, for example, or use other techniques. This is my intellectual unit that I want to archive. I put the whole thing in a box and write on it what is inside and what happened to the box or its contents during archiving. That is then my AIP. If I later want to hand over the contents of this box to someone, they don't necessarily have to be interested in what happened to the box; they can take the contents, work with them, and know exactly in which order the pages follow each other.

Example 2 - Web page

I would like to present a second example to illustrate another aspect. Let us assume that we are to archive a very specific web page, which for the sake of simplicity consists of an HTML document, CSV files and graphic files. If you look at the web page, there is always a link in the text between one of the CSV files and one graphic file. The assignment could be the visualization of an experiment. It is only important to the department that the values, the textual content and the assignment to the graphic are not lost. Together with the department we determined the significant properties and after a lot of effort we transferred the website (IE) into the long-term archive. After some time we found out that the graphic files were subject to format obsolescence and had to be migrated to a new format. We decide on the new image archive format PNG/A and migrate the old files.

But is this sufficient? No. The HTML document still contains the file name of the old format. Should we change the file names or leave them as they are? The principle of least surprise speaks for "change". But if we change the file names during the migration, we inevitably have to change the file names in the HTML document as well.
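To make the point concrete, here is a deliberately naive Python sketch of what such a reference rewrite could look like. The file names are invented, and a real migration tool should use a proper HTML parser rather than plain string replacement:

```python
def rewrite_image_refs(html, mapping):
    """Replace migrated file names inside an HTML document.
    mapping: old file name -> new file name after format migration."""
    for old, new in mapping.items():
        html = html.replace(old, new)
    return html

# hypothetical example: a graphic migrated from GIF to PNG
page = '<p>see <img src="experiment1.gif"></p>'
page = rewrite_image_refs(page, {"experiment1.gif": "experiment1.png"})
```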

Let's summarize

  1. Significant properties should be recorded at the level of the IE. They are not file-dependent.
  2. Metadata that is essential to represent the relationship of objects within an IE is a mandatory part of that IE.
  3. Format migrations can result in changes to other parts of the IE, even parts that are not migrated themselves.
  4. Metadata and data inside an IE must never refer to data or metadata outside of it.
  5. Metadata outside of an IE, however, may well reference metadata and data of an IE.

Whew, that was a lot of thinking, but I hope it was worth thinking about it.

Mittwoch, 22. Juli 2020

Format recognition, new analysis options?

Previous work

In an older article, I have already done an analysis of PRONOM signatures. As of today, the module for this exists on CPAN.

In addition to the statistics on PRONOM signatures, the Perl package comes with two more helper scripts that can make the work of a long-term archivist easier.

Format identification

On the one hand, we have the functionality of classic format recognition. The script delivers all hits. The output indicates the quality of the regex: this does not say how well the PRONOM signature matches the file, but how specifically the signature was constructed.

Here is an example output for a TIFF file which was wrongly recognized as GeoTIFF by DROID:

perl -I lib bin/ -s DROID_SignatureFile_V96.xml -b /tmp/00000007.tif
/tmp/00000007.tif identified as Tagged Image File Format with PUID fmt/353 (regex quality 1)
/tmp/00000007.tif identified as Geographic Tagged Image File Format (GeoTIFF) with PUID fmt/155 (regex quality 2)
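The double hit is no accident: GeoTIFF files are TIFF files, so both signatures anchor on the same TIFF header bytes. As a rough illustration (a simplification, not the actual PRONOM regex), the generic TIFF magic can be checked like this:

```python
def looks_like_tiff(header: bytes) -> bool:
    """Check the two generic TIFF magic byte sequences:
    little-endian 'II' followed by 42, or big-endian 'MM' followed by 42."""
    return header.startswith(b'II*\x00') or header.startswith(b'MM\x00*')
```

A more specific signature like GeoTIFF then needs additional bytes beyond this common prefix, which is what the "regex quality" hints at.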

Colorized output of possible signature hits in the hexeditor wxHexEditor

Under Linux you can use the editor wxHexEditor to analyze files. It allows you to create tag-files, in which you can define sections that are marked with colors and annotated.

The script pronom2wxhexeditor creates such a file. In the following you can see the call and a screenshot.

perl -I lib bin/ -s DROID_SignatureFile_V96.xml -b /tmp/00000007.tif

What next?

Well, it's up to us as a community to use the existing tools and their possibilities to improve our daily work. Anyone who has suggestions for improvement or other ideas is welcome to share them with us.

I would be especially happy if helping hands took the PRONOM statistics to heart and helped improve the PRONOM signatures.

It makes sense to start with the orphaned signatures and to re-check signatures that are used multiple times.

Montag, 13. Juli 2020

Why it is a stupid idea to consider CSV as a valid long-term preservation file format

Take CSV!

It's so nice and quick and easy to say. Take CSV!

For simple cases that may be true. CSV files look so simple, so innocent, so sweet. Yet by their very nature they are insidious, vicious, and resemble a bloody walk into the deepest dungeons of a classic role-playing game.

Let us begin our journey.

Innocent simplicity

You take a separator, e.g. the comma, and use it to separate your values. Pour the whole thing into a readable form. Done.

Okay. We need a second separator to show us the next line. But then, done! It's a CSV.

Hmm. There was something. The line separator. Now, is that line feed, carriage return, or carriage return plus line feed? It depends, for example, on which operating system you're running.

The monster is growing

It is not a bad idea to separate values of a list by commas. Especially for Americans, this feels quite natural.

In other parts of the world, the decimal places of fractional numbers are separated by commas. Good, then we'll give the spreadsheets the opportunity to define the separator freely. Problem solved.

Well, not quite. In other contexts, the separator could somehow appear in the individual values of a list. Good, then we'll introduce quoting. We define a character that allows us to recognize whether a separator is a real separator or just a text component of a list value. Quotation marks would fit. That was easy, wasn't it?

Short break

So, to sum up: CSV files are easy. You need a separator, which can be a comma or anything else. We have a second separator that separates the lines; usually it comes in three variants. And we need quoting so that a value cannot be confused with a separator.

Yeah, it may have been a little more complex than it looked at first. But what is there to make it worse?

Little toothy pegs!

Hmm, what if I want to store a text like this as a value after the raw value 1:

And he said "Oh, no!"

In the text we have a comma, which would be protected by the quoting. But we also have quotation marks, which we need for our quoting. No problem: we double the quotation mark at that point to indicate that the text is not finished yet. So in the CSV it now looks like this:

1, "And he said ""Oh, no!"""

I got it.

But wait, what happens if my text consists of a single quotation mark?

1, """"

You're lucky. It seems to work.

Wait, so what if I have a lot of quotation marks? As in

""""""

This is translated to
1, """"""""""""""
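The doubling rule works out exactly as real CSV libraries implement it. A quick check with Python's csv module:

```python
import csv
import io

# a row whose second value contains both a comma and quotation marks
buf = io.StringIO()
csv.writer(buf).writerow([1, 'And he said "Oh, no!"'])
# buf.getvalue() == '1,"And he said ""Oh, no!"""\r\n'

# a value that is a single quotation mark
buf2 = io.StringIO()
csv.writer(buf2).writerow([1, '"'])
# buf2.getvalue() == '1,""""\r\n'
```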

It works, too.

The problem is in the details

Now, a nasty little devil might get the idea of constructing a text value that contains line breaks, for example this one:

Evil
text

That would then be:

1, "Evil
text"

Oops! If I now stubbornly read this in line by line, I would get strange lines.
Good thing there is real software out there that reads and parses CSV files cleanly from the beginning. Not that anyone here still uses 'grep' and co.
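Real parsers handle this correctly. A quick demonstration with Python's csv module (the file content is invented for illustration):

```python
import csv
import io

# the first value contains an embedded line break
raw = '1,"Evil\ntext"\n2,harmless\n'
rows = list(csv.reader(io.StringIO(raw)))
# a real parser keeps the record together despite the line break:
# rows == [['1', 'Evil\ntext'], ['2', 'harmless']]
# whereas naive line splitting would see three "lines"
```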

The Abyss

Have we actually talked about character encoding yet? ASCII, Latin-1, UTF-32? UTF-8? With or without a byte order mark? No. Let's turn back. We still have a chance.

Later, at the pub.

I admit it was a terrible trip. Now, over a cold beer, we can laugh about it. But our hearts were in our mouths. We had no idea what to expect.

If only there had been a sign that said what character encoding, what line end encoding, what separators for lines and columns we could expect, yes, then we would have been able to understand CSV and we would have been spared the horror. But the horror comes from the darkness, from the premonitions of the unknown.
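Incidentally, some CSV libraries can at least guess the missing "sign". Python's csv.Sniffer, for example, tries to detect the dialect heuristically; guessing is of course no substitute for a declared encoding and dialect:

```python
import csv

# a hypothetical sample using semicolons as separators
sample = 'name;value\nalpha;1\nbeta;2\n'
dialect = csv.Sniffer().sniff(sample)
# dialect.delimiter == ';'
```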

Therefore, be warned!

Don't use CSV, it could get you!

Dienstag, 18. Februar 2020

Format zoo for videos - a bad idea in digital preservation


In an article, reasons are given not to apply the existing normalization of born-digital videos to FFV1, but to convert to lossy codecs instead. Elsewhere I have even heard that normalization is not applied at all because it requires so many resources.

Why is normalization a good idea after all?

Normalization ensures that a manageable set of file formats remains from the huge format zoo, which can be handled well in the future. Normalization therefore reduces the organizational complexity above all.

And why should you use Matroska/FFV1?

FFV1 has the disadvantage of imposing higher storage requirements on its users, but in my opinion the following points outweigh this:

  • FFV1 is much less complex than H.264 (read "reduced technical complexity")
  • FFV1 (like other lossless codecs) allows automatic format migration (see also RAWcooked) — this reduces organizational complexity
  • FFV1 is freely available, widely used, well documented and standardized

The point that FFV1 is also more resistant to bit rot is just the icing on the cake.
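For reference, normalization to Matroska/FFV1 can be done with FFmpeg. A typical call might look like this (the flags follow common preservation recommendations; input and output names are placeholders, and the parameters should be adjusted to your material):

```shell
ffmpeg -i input.avi \
  -c:v ffv1 -level 3 -g 1 -slices 16 -slicecrc 1 \
  -c:a copy \
  output.mkv
```

The `-slicecrc 1` option adds per-slice checksums, which is part of what makes FFV1 comparatively robust against bit rot.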


Incidentally, personnel costs, rather than pure storage costs, are the cost driver in digital preservation.

Hence, the ultimate question is: how expensive is storage capacity in relation to the reduced technical and organizational complexity?

Mittwoch, 29. Mai 2019

Legacy media

This is the reason why you have to pay special attention to legacy digital media: reading defective tracks of a floppy disk, for example, requires special hardware (and special knowledge).

Montag, 1. April 2019

Beware of bit silverfish - collection preservation in the digital age

Pest control is a perennial problem in libraries and archives. Silverfish, paper fish and other culprits feast on the holdings and cause considerable damage in the process.

Since pest control is not listed as an explicit task in the OAIS reference model, some digital long-term archives have shown clear deficits here so far. In the meantime, however, even these institutions are realizing more and more clearly that pest control must not be neglected.

Attracted by extensive digital holdings, bit silverfish and beetles (known in the technical jargon as "bugs") nest in piles of cables and multiply there undisturbed. The food supply from the abundant cable spaghetti is good, and so the populations grow quickly. Leftovers of junk and crumbs of binary garbage aggravate the problem further.

Not only the number of the little fish, but also their long lifespan is a problem. Many of them live to be eight to ten years old, microfiche fish even considerably older.

In the musty environment of many digital archives, magnetic tape worms also feel at home, feasting above all on the data on WORM tapes. Data that is not destroyed by the little pests decays in the putrid environment through bit rot into unreadable data compost, which clogs the data lines and thus disrupts processing.

The new plague does have one good side, though: resourceful computer scientists have found out that bit silverfish are excellently suited for the production of bit grease. They use it to lubricate cable connections and thus reduce friction during data transmission, which in turn has a positive effect on throughput.