
DVCAM transfers, error correction coding & misaligned machines

This article is inspired by a collection of DVCAM tapes sent in by London-based cultural heritage organisation Sweet Patootee. Below we will explore several issues that arise from the transfer of DVCAM tapes, one of the many Digital Video (DV) formats that emerged in the mid-1990s. A second article will follow soon which focuses on the content of the Sweet Patootee archive, a fascinating collection of video-taped oral histories of First World War veterans from the Caribbean.

The main issue we want to explore below is the role error correction coding plays, both in the composition of the digital video signal and during preservation playback. We want to highlight this issue because it is often assumed that DVCAM, which first appeared on the market in 1996, is a fairly robust format.

The work we have done to transfer the tapes to digital files indicates that error correction coding is working in overdrive to ensure we can see and hear these recordings. The implication is that DVCAM collections, and DV-based archives more widely, should really be a preservation priority for institutions, organisations and individuals.

Before we examine this in detail, let’s learn a bit about the technical aspects of error correction coding.

Error error error

Error correction coding is a staple part of digital audio and audio-visual media. It is of great importance in the digital world of today, where higher volumes of transmitted signals require greater degrees of compression, and therefore more sophisticated error correction schemes, as this article argues.

Error correction proper uses redundant data to detect and reconstruct parts of the signal that have been corrupted. When the damage is too great to reconstruct exactly, the system falls back on a process of prediction and estimation known as interpolation or concealment, which takes an estimate of the original recorded signal to fill in the missing data. Corruption can occur due to wear and tear, or to insufficiencies in the original recorded signal.
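A minimal sketch of the concealment idea, assuming the playback system can flag which samples are untrustworthy (this is purely illustrative and not the scheme used inside any particular DV or DAT machine):

```python
# Illustrative only: conceal samples flagged as bad by interpolating
# between the nearest trustworthy neighbours on either side.
def conceal(samples, bad_indices):
    out = list(samples)
    bad = set(bad_indices)
    for i in bad:
        left = next((out[j] for j in range(i - 1, -1, -1) if j not in bad), None)
        right = next((out[j] for j in range(i + 1, len(out)) if j not in bad), None)
        if left is not None and right is not None:
            out[i] = (left + right) / 2      # estimate from both neighbours
        elif left is not None or right is not None:
            out[i] = left if left is not None else right
        else:
            out[i] = 0                       # nothing left to estimate from
    return out

# the lost sample at index 3 is filled with ~0.6, the average of 0.4 and 0.8
print(conceal([0.0, 0.2, 0.4, None, 0.8, 1.0], bad_indices=[3]))
```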

Yet as Hugh Robjohns explains in the article ‘All About Digital Audio’ from 1998:

 ‘With any error protection system, if too many erroneous bits occur in the same sample, there is a risk of the error detection system failing, and in practice, most media failures (such as dropouts on tape or dirt on a CD), will result in a large chunk of data being lost, not just the odd data bit here and there. So a technique called interleaving is used to scatter data around the medium in such a way that if a large section is lost or damaged, when the data is reordered many smaller, manageable data losses are formed, which the detection and correction systems can hopefully deal with.’
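A toy sketch makes this concrete (the block dimensions are invented for illustration): data is written into a grid one way and read out the other, so a burst of consecutive losses on the medium comes back as isolated, correctable errors once the data is re-ordered.

```python
# Toy block interleaver: write row-by-row, read column-by-column.
# ROWS and COLS are invented; real formats use their own interleave patterns.
ROWS, COLS = 4, 8

def interleave(block):
    grid = [block[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
    return [grid[r][c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(block):
    grid = [block[c * ROWS:(c + 1) * ROWS] for c in range(COLS)]
    return [grid[c][r] for r in range(ROWS) for c in range(COLS)]

data = list(range(ROWS * COLS))
on_tape = interleave(data)

# simulate a burst error: four consecutive symbols destroyed on the tape
for i in range(8, 12):
    on_tape[i] = None

recovered = deinterleave(on_tape)
print([i for i, v in enumerate(recovered) if v is None])
# -> [2, 10, 18, 26]: the burst is scattered to one bad symbol per row,
#    which per-row error correction can then repair
```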

There are many different types of error correction, and ‘like CD-ROMs, DV uses Reed-Solomon (RS) error detection and correction coding. RS can correct localised errors, but seldom can reconstruct data damaged by a dropout of significant size (burst error),’ explains this wonderfully detailed article about DV video formats, archived on the Web Archive.
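For readers who want to see that localised-versus-burst behaviour for themselves, the third-party Python reedsolo package (our choice for a desktop demonstration; it is not the specific RS code built into DV equipment) behaves as described:

```python
# Requires the third-party reedsolo package (pip install reedsolo).
from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(10)                  # 10 parity bytes: corrects up to 5 byte errors
encoded = rsc.encode(b"digital video frame data")

# a couple of scattered (localised) errors: correctable
damaged = bytearray(encoded)
damaged[0] ^= 0xFF
damaged[7] ^= 0xFF
decoded = rsc.decode(damaged)[0]   # recent reedsolo versions return a tuple
print(decoded)                     # b'digital video frame data'

# a long burst of errors exceeds the code's capacity; decoding will
# typically fail rather than reconstruct the data
damaged = bytearray(encoded)
for i in range(12):
    damaged[i] ^= 0xFF
try:
    rsc.decode(damaged)
except ReedSolomonError:
    print("burst error too large to correct")
```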

The difference correction makes

Error correction is one of the key things that differentiates digital technologies from their analogue counterparts. As the International Association of Sound and Audiovisual Archives' (IASA) Guidelines on the Production and Preservation of Digital Audio Objects (2009) explain:

‘Unlike copying analogue sound recordings, which results in inevitable loss of quality due to generational loss, different copying processes for digital recordings can have results ranging from degraded copies due to re-sampling or standards conversion, to identical “clones” which can be considered even better (due to error correction) than the original.’ (65)

To think that digital copies can, at times, exceed the quality of the original digital recording is both an astonishing and paradoxical proposition. After all, we are talking about a recording that improves at the perceptual level despite being compositionally damaged. It is important to remember, though, that error correction coding cannot work miracles, and there are limits to what it can do.

Dietrich Schüller and Albrecht Häfner argue in IASA's Handling and Storage of Audio and Video Carriers (2014) that ‘a perfect, almost error free recording leaves more correction capacity to compensate for handling and ageing effects and, therefore, enhances the life expectancy.’ If, however, a recording is made ‘with a high error rate, then there is little capacity left to compensate for further errors’ (28-29).

The bizarre thing about error correction coding, then, is the appearance of clarity it can create. And if there are no other recordings to compare with the transferred file, it is very hard to know what the recorded signal would look and sound like were its errors not being corrected.

When we watch the successfully migrated, error-corrected file post-transfer, it matters little whether the original was damaged. If a clear signal is transmitted with high levels of error correction, the errors will not be transferred, only the clear image and sound.

Contrast this with a damaged analogue tape, where the deterioration would be clearly discernible on playback. The plus point of analogue tape is that it degrades gracefully: it is possible to play back an analogue tape recording with real physical deterioration and still get surprisingly good results.

Digital challenges

The big challenge of working with any digital recording on magnetic tape is knowing when a tape is in poor condition prior to playback. Often a tape will look fine and, because of error correction, will sound fine too, right up until it stops working entirely.

How then did we know that the Sweet Patootee tapes were experiencing difficulties?

Professional DV machines such as our DVC PRO have a warning function that flashes when the error correction coding is working at heightened levels. On our first attempt to play back the tapes, we noticed regular sections on most of them that could not be fixed by error correction.

The ingest software we use is designed to automatically retry sections of the tape with higher levels of data corruption until a signal can be retrieved. Imagine a process where a tape automatically goes through a playing-rewinding loop until the signal can be read. We were able to play back the tapes eventually, but the high level of error correction was concerning.
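A heavily simplified sketch of that retry loop might look like the following; the TapeDeck class and its methods are hypothetical stand-ins for illustration, not the real interface of our ingest software or of any DVCAM deck:

```python
import random

# Hypothetical sketch only: TapeDeck simulates a capture interface.
class TapeDeck:
    def rewind_to(self, timecode):
        pass  # a real deck would shuttle the tape back to this point
    def read_section(self, start, end):
        # pretend some passes still contain uncorrectable frames
        return ["frame-data"], random.choice([0, 0, 3])

MAX_RETRIES = 5

def ingest_section(deck, start, end):
    """Re-read a troublesome section until error correction copes,
    or give up after MAX_RETRIES passes."""
    frames = []
    for attempt in range(1, MAX_RETRIES + 1):
        deck.rewind_to(start)
        frames, uncorrectable = deck.read_section(start, end)
        if uncorrectable == 0:
            return frames              # clean pass
        print(f"attempt {attempt}: {uncorrectable} uncorrectable frames, retrying")
    return frames                      # best effort after repeated passes

ingest_section(TapeDeck(), "00:10:00:00", "00:11:00:00")
```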

In fact, around 25% of the recorded signal in DVCAM is given over to subcode data, error detection and error correction.

DVCAM & Mis-alignment

It is not just the over-active error correction on DVCAM that should set alarm bells ringing.

Alan Griffiths of Bristol Broadcast Engineering, a trained SONY engineer with over 40 years' experience in the television industry, told us that early DVCAM machines pose particular preservation challenges. The main problem is that the ‘mechanisms are completely different’ on earlier DVCAM machines, which means there is ‘no guarantee’ they will play back effectively on later models.

Recordings made on early DVCAM machines can exhibit back tension problems and tracking issues. This increases the likelihood of DV dropout on playback, because information was imperfectly recorded onto the original tape. IASA confirms that ‘misalignment of recording equipment leads to recording imperfections, which can take manifold form. While many of them are not or hardly correctable, some of them can objectively be detected and compensated for.’

One possible solution to this problem, as with DAT tapes, is to ‘misalign’ the replay digital video tape recorder to match the misaligned recordings. However, ‘adjustment of magnetic digital replay equipment to match misaligned recordings requires high levels of engineering expertise and equipment’ (2009; 72), and must therefore not be ‘tried at home,’ so to speak.

Our experience with the Sweet Patootee tapes indicates that DVCAM tapes are a more fragile format than is commonly thought, particularly if your DVCAM collection was recorded on early machines. If you have a large collection of DVCAM tapes we strongly recommend that you begin to assess the contents and make plans to transfer them to digital files. As always, do get in touch if you need any advice to develop your plans for migration and preservation.

 


Transferring Digital Audio Tapes (DATs) to digital audio files

At Greatbear, we carefully restore and transfer to digital file all types of content recorded to Digital Audio Tape (DAT), and can support all sample rate and bit depth variations.

This post focuses on some of the problems that can arise with the transfer of DATs.

With an immature recording method (digital) grafted onto a mature carrier (magnetic tape), the digital audio recording revolution was never going to get it right first time (although DATs were not, of course, the first digital recordings made on tape).

Indeed, at a meeting of audio archivists held in 1995, there was a consensus even then that DAT was not, and would never be, a reliable archival medium. One participant stated: ‘we have tapes from 1949 that sound wonderful,’ and ‘we have tapes from 1989 that are shot to hell.’ And that was nearly twenty years ago! What chances do the tapes have now?

A little DAT history

Before we explore that, let’s have a little DAT history.

SONY introduced Digital Audio Tape (DAT) in 1987. At roughly half the size of an analogue cassette tape, DAT can record at sampling rates higher than, equal to or lower than that of a CD (48, 44.1 or 32 kHz respectively), at 16-bit quantisation.
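As a back-of-the-envelope illustration (our own arithmetic, not figures quoted from the DAT specification), those three modes correspond to the following raw stereo PCM data rates; the rate actually written to tape is higher once error correction and subcode overheads are added:

```python
# Raw stereo PCM data rate = sampling rate x bit depth x channels.
for rate_hz in (48_000, 44_100, 32_000):
    bits_per_second = rate_hz * 16 * 2
    print(f"{rate_hz / 1000:g} kHz: {bits_per_second / 1_000_000:.3f} Mbit/s")

# 48 kHz: 1.536 Mbit/s
# 44.1 kHz: 1.411 Mbit/s
# 32 kHz: 1.024 Mbit/s
```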

Although popular in Japan, DATs were never widely adopted by the consumer market because they were more expensive than their analogue counterparts. They were, however, embraced in professional recording contexts, in particular for recording live sound.

It was recording industry paranoia, particularly in the US, that really sealed the fate of the format. With their threatening promise of perfect replication, DAT tapes were the subject of an unsuccessful lobbying campaign by the Recording Industry Association of America (RIAA), which saw DAT as the ultimate attack on copyright law and pressed for the introduction of the Digital Audio Recorder Copycode Act of 1987.

This proposed law would have required each DAT machine to have a ‘copycode’ chip installed that could detect whether prerecorded copyrighted music was being replicated. The method employed a notch filter that would subtly distort the quality of the copied recording, thus sabotaging acts of piracy tacitly enabled by the DAT medium. The law was not passed and compromises were made, although the US Audio Home Recording Act of 1992 did impose taxes on DAT machines and blank media.

How did they do ‘dat?

Like video tape recorders, DAT machines use a rotating head and a helical scan method to record data. The helical scan can, however, pose real problems for preservation transfers of DAT tapes, because it makes it difficult to splice the tape back together if it becomes sticky and snaps during the tape wind. With analogue audio tape, which records information longitudinally, it is far easier to splice the tape together and continue the transfer without risking irrevocable information loss.

Another problem posed by the helical scan method is that such tapes are more vulnerable to tape pack and backing deformation, as the CLIR guide explains:

‘Tracks are recorded diagonally on a helical scan tape at small scan angles. When the dimensions of the backing change disproportionately, the track angle will change for a helical scan recording. The scan angle for the record/playback head is fixed. If the angle that the recorded tracks make to the edge of the tape do not correspond with the scan angle of the head, mistracking and information loss can occur.’

When error correction can’t correct anymore

Most people will be familiar with the sound of digital audio dropouts even if they don’t know the science behind them. You will know them most probably as those horrible clicking noises produced when the error correction technology on CDs stops working. The clicks indicate that the ‘threshold of intelligibility’ for digital data has been breached and, as theorist Jonathan Sterne reminds us, ‘once their decay becomes palpable, the file is rendered entirely unreadable.’

Our SONY PCM 7030 professional DAT machine has a ‘playback condition’ light that flashes if an error is present. On sections of the tape where quality is really bad, the ‘mute’ light flashes to indicate that the error correction technology can’t fix the problem. In such cases dropouts are very audible. Most DAT machines did not have such a facility, however, and you only knew there was a problem when you heard the glitchy-clickety-crackle during playback, when, of course, it was too late to do anything about it.

The bad news for people with large, yet-to-be-migrated DAT archives is that the format is ‘particularly susceptible to dropout. Digital audio dropout is caused by a non-uniform magnetic surface, or a malfunctioning tape deck. However, because the magnetically recorded information is in binary code, it results in a momentary loss of data and can produce a loud transient click or worse, muted audio, if the error correction scheme in the playback equipment cannot correct the error,’ the wonderfully informative A/V Artifact Atlas explains.

Given the high-density nature of digital recordings on narrow magnetic tape, even the smallest speck of dust can cause digital audio dropouts. Such errors can be very difficult to eliminate. Cleaning the playback heads and re-transferring is an option, but if the dropout was recorded at source or the surface of the tape is damaged, then the only way to treat irregularities is to apply audio restoration technologies, which may present a problem if you are concerned with maintaining the authenticity of the original recording.

Listen to this example of what a faulty DAT sounds like: https://cdn.thegreatbear.co.uk/wp-content/uploads/2014/10/dat-playback-conditions.mp3

Playback problems and mouldy DATs

Mould growth on the surface of DAT tape

A big problem with DAT transfers is actually being able to play back the tapes, or what is known in the business as ‘DAT compatibility.’ In an ideal world, to get the best possible transfer you would play back a tape on the same machine it was originally recorded on. The chances of being able to do this are, of course, pretty slim. While you can play your average audio cassette on pretty much any cassette machine, the same cannot be said for DAT tapes. Often recordings were made on misaligned machines. The only solution for playback, Richard Hess suggests, is to mis-adjust a working machine to match the alignment of the recording on the tape.

As with any archival collection, if it is not stored in appropriate conditions then mould growth can develop. As mentioned above, DAT tapes are roughly half the size of the common audio cassette, and the tape inside is thin and narrow. This makes them difficult to clean because they are mechanically fragile. Adapting a machine specifically for the purposes of cleaning, as we have done with our Studer machine, is the ideal solution. There is, however, not a huge amount of research and information about restoring mouldy DATs available online, even though we are seeing more and more DAT tapes exhibiting this problem.

As with much of the work we do, the recommendation is to migrate your collections to digital files as soon as possible. But often it is a matter of priorities and budgets. From a technical point of view, DATs are a particularly vulnerable format. Machine obsolescence means that, compared with their analogue counterparts, professional DAT machines will be increasingly hard to service in the long term. As detailed above, glitchy dropouts are almost inevitable given the sensitivity and all-or-nothing quality of digital data recorded on magnetic tape.

It seems fair to say that despite being meant to supersede analogue formats, DATs are far more likely to drop out of recorded sound history in a clinical and abrupt manner.

They therefore should be a high priority when decisions are made about which formats in your collection should be migrated to digital files immediately, over and above those that can wait just a little bit longer.


Early digital tape recordings on PCM/ U-matic and Betamax video tape

We are now used to living in a born-digital environment, but the transition from analogue to digital technologies did not happen overnight. In the late 1970s, early digital audio recordings were made possible by a hybrid analogue/digital system. It was composed of the humble transport and recording mechanisms of the video tape machine, and a not-so-humble PCM (pulse-code modulation) digital processor. Together they created the first two-channel stereo digital recording system.

The first professional-use digital audio processor, made by SONY, was the PCM-1600. It was introduced in 1978 and used a U-matic tape machine as its transport. Later models, the PCM-1610 and 1630, acted as the first standard for mastering audio CDs in the 1980s. SONY employee Toshitada Doi, whose impressive CV includes the development of the PCM adaptor, the Compact Disc and the CIRC error correction system, visited recording studios around the world in an effort to facilitate the professional adoption of PCM digital technologies. He was not, however, welcomed with open arms, as the SONY corp. website explains:

'Studio engineers were opposed to digital technology. They criticized digital technology on the grounds that it was more expensive than analogue technology and that it did not sound as soft or musical. Some people in the recording industry actually formed a group called MAD (Musicians Against Digital), and they declared their position to the Audio Engineering Society (AES).'

Several consumer/semi-professional models were marketed by SONY in the 1970s and 1980s, starting with the PCM-1 (1977). In a retro-review of the PCM-F1 (1981), Dr Frederick J. Bashour explains that

'older model VCRs often worked better than newer ones since the digital signal, as seen by the VCR, was a monochrome pattern of bars and dots; the presence of modern colour tweaking and image compensation circuits often reduced the recording system's reliability and, if possible, were turned off.'

Why did the evolution of an emerging digital technology stand on the shoulders of what had, by 1981, become a relatively mature analogue technology? It all comes down to the issue of bandwidth. A high quality PCM audio recording requires 1-1.5 MHz of bandwidth, far greater than a conventional analogue audio signal (15-20 kHz). While this bandwidth was beyond the scope of analogue audio recording technology of the time, video tape recorders did have the capacity to record signals with such higher bandwidths.
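As a rough sanity check (our own arithmetic, not a figure from any specification): stereo 16-bit PCM at 44.1 kHz amounts to 44,100 × 16 × 2 ≈ 1.4 million bits per second, and even with simple channel coding a bit stream of that rate needs a recording bandwidth on the order of a megahertz, somewhere between fifty and a hundred times that of a baseband analogue audio signal.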

If you have ever wondered where the 16-bit/44.1 kHz sampling standard for the CD came from, it is because in the early 1980s, when the CD standard was agreed, there was no other practical way of storing digital sound than by a PCM converter and video recorder combination. As the Wikipedia entry for the PCM adaptor explains, 'the sampling frequencies of 44.1 and 44.056 kHz were thus the result of a need for compatibility with the 25-frame (CCIR 625/50 countries) and 30-frame black and white (EIA 525/60 countries) video formats used for audio storage at the time.' The sampling rate was adopted as the standard for CDs and, unlike many other things in our rapidly changing technological world, it hasn't changed since.
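The arithmetic behind that compatibility is worth spelling out; the figures below are the commonly cited ones for PCM adaptors, which stored three audio samples per usable video line (a small sketch of ours, not a quotation from any standard):

```python
# Why 44.1 kHz? Three samples stored per usable video line, multiplied by
# the usable lines per field and the field rate, gives the same product
# in both television systems:
pal  = 3 * 294 * 50   # 625/50 video: 294 usable lines per field, 50 fields/s
ntsc = 3 * 245 * 60   # 525/60 video: 245 usable lines per field, 60 fields/s
print(pal, ntsc)      # both print 44100
```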

The fusion of digital and analogue technologies did not last long, and the introduction of DAT in 1987 rendered the PCM digital converter/video tape system largely obsolete. DAT recorders basically did the same job as the PCM/video combination but came in one, significantly smaller, machine. DAT machines had the added advantage of accepting multiple sampling rates (the standard 44.1 kHz, as well as 48 kHz and 32 kHz, all at 16 bits per sample, plus a special LP recording mode using 12 bits per sample at 32 kHz for extended recording time).

Problems with migrating early digital tape recordings

There will always be a risk with any kind of magnetic tape recording that there won't be enough working tape machines to play back the material recorded on them in the future. As spare parts become harder to source, machines with worn-out transport mechanisms will simply become inoperable. We are not quite at this stage yet, and at Greatbear we have plenty of working U-matic, Betamax and VHS machines, so don't worry too much! Machine obsolescence is, however, a real threat facing tape-based archives.

Such a problem comes into sharp relief when we consider the case of digital audio recordings made on analogue video tape machines. Audio use 'works' the tape transport in a far more vigorous fashion than average domestic video playback: the tape may be rewound and fast-forwarded more often, and in a professional environment may be in constant use, leading to greater wear and tear.

Those who chose to adopt digital early and made recordings on tape will have marvelled at the lovely clean recordings and the wonders of error correction technology. As a legacy format, however, tape-based digital recordings are arguably more at risk than their analogue counterparts. They are doubly compromised by the fragility of the tape and by the particular problems that befall digital technologies when things go wrong.

'Edge damage' is very common in video tape and can happen when the tape transport becomes worn. Wear can alter the alignment of the transport mechanism, leading it to move up and down and crush the tape. As you can see in this photograph, the edge of this tape has become damaged.

Because it is a digital recording, this has led to substantial problems with the transfer, namely that large sections of the recording simply 'drop out.' In instances such as these, where the tape itself has been damaged, analogue recordings on tape are infinitely more recoverable than digital ones. Dr John W. C. Van Bogart explains that

'even in instances of severe tape degradation, where sound or video quality is severely compromised by tape squealing or a high rate of dropouts, some portion of the original recording will still be perceptible. A digitally recorded tape will show little, if any, deterioration in quality up to the time of catastrophic failure when large sections of recorded information will be completely missing. None of the original material will be detectable in these missing sections.'

This risk of catastrophic, as opposed to gradual, loss of information on tape-based digital media is what makes these recordings particularly fragile and at risk. What is particularly worrying about digital tape recordings is that they may not show any external signs of damage until it is too late. We therefore encourage individuals, recording studios and memory institutions to assess the condition of their digital tape collections and take prompt action if the recorded information is valuable.

The story of PCM digital processors and analogue video tape gives us a fascinating window into a time when we were not quite analogue, but not quite digital either, demonstrating how technologies co-evolve, using the capacities of what is available in order to create something new.

For our PCM audio on video tape transfer services please follow this link: greatbear - PCM audio on video tape
