digitisation expertise

Sharing insights and expertise around the digital transfer of audio and video assets

DAT restoration: The High – Martin Hannett Sessions

Record Store Day is usually 'the one day each year when over 200 independent record shops all across the UK come together to celebrate their unique culture. Special vinyl releases are made exclusively for the day, in what’s become one of the biggest annual events on the music calendar.' This year, due to COVID-19, Record Store Day is being split across 3 dates: 29th August, 26th September and 24th October.

This Record Store Day, Saturday 29th August 2020, is particularly exciting for Greatbear, as it sees Manchester record shop Vinyl Revival release The High - Martin Hannett Sessions, a restoration and digitisation project we worked on earlier this year.

The High - Martin Hannett Sessions on white vinyl © Vinyl Revival 2020

One of the Martin Hannett session DAT tapes digitised at Greatbear

Martin Hannett - Manchester music producer, known for his era-defining creative work with Buzzcocks, Joy Division, New Order, John Cooper Clarke, The Stone Roses, Happy Mondays and many others - died aged 42 in April 1991.

The tapes we received were DAT (Digital Audio Tape) masters, produced by Hannett at recording sessions with The High in 1989 (at Strawberry Studios) and 1991 (at Great Linford Manor), and included Hannett's last production work before his untimely death.

The High - Martin Hannett session at Strawberry Studios 1989: producer Martin Hannett / Hannett inspecting DAT manual. Stills from footage by Nigel Couzens.

The High - Martin Hannett session at Strawberry Studios 1989: mixing desk / Andy Couzens. Stills from footage by Nigel Couzens.

The High were formed in 1989 by former Turning Blue singer John Matthews and former Buzzcocks F.O.C. members Andy Couzens (guitar, also formerly of The Stone Roses and pre-Roses bands The Patrol and Waterfront), Simon Davies (bass), and drummer Chris Goodwin (also formerly of the Waterfront as well as the Inspiral Carpets). They were signed by London Records and had three UK Singles Chart hits in 1990 before breaking into the top 30 in 1991 with a revamped version of their debut single, the Martin Hannett-produced "Box Set Go".

The High DAT cassette insert card tracks 1-4

The High DAT cassette insert card tracks 5-9

analogue to digital

From the Nigel Couzens footage (see video clip below), it looks like the Strawberry Studios sessions were recorded to 2 inch analogue tape on a 24-track Studer A80. This was quite an old machine by that time, as the A800 and possibly the A820 would also have been available, but maybe they just loved the sound of the A80.

DAT, introduced by Sony in 1987, became popular in the audio and recording industry for mastering during the 1990s. The initial recordings would be made to 2" (or other width) analogue tape, but the mixed and produced final versions would be recorded to DAT, gaining the benefits of uncompressed PCM digital encoding and avoiding the addition of further analogue tape hiss at the mastering stage. This process can be seen as a stepping stone towards an emerging all-digital production chain and the development of hard disk recording.

fragile tape

At 3.81mm wide and 0.013mm thick, DAT is more fragile than other cassette-based digital tape formats such as DTRS/DA-88, ADAT and PCM digital audio, or any of the reel-to-reel formats (analogue or digital).

This makes it vulnerable to ripping. The High - Martin Hannett Sessions DAT masters arrived at Greatbear with visible signs of mould growth along the edges of the tape. (See the fuzzy white threads along the surface of the tape pack in the pictures above and below.) When this happens, the mould sticks the layers of the tape together - particularly along the edges - which inevitably leads to the tape ripping under the high tension of playback.

A ripped tape is especially problematic because DAT uses a helical scan recording system, based on a miniature video transport, and so cannot be spliced for clean edits. (Splices also risk irreparable damage to heads on the drum of the playback machine.) A ripped DAT tape - the helically-imprinted signal being bisected - results in irreversible signal loss.

Red arrow showing point where a speck of mould caused this DAT to rip. (Not one of The High - Martin Hannett tapes, but one previously brought to Greatbear in this state!)

Disassembly: unscrewing The High DAT cassette shell to access tape inside

restoration

We've found the safest way to restore mould-stricken DAT cassettes to a playable state and avoid ripping is to:

  • Acclimatise the tape to the controlled temperature and humidity of the Greatbear studio, driving the mould spores to dormancy
  • Disassemble the cassette shell
  • Very slowly and carefully unwind and rewind the tape by hand, dislodging the 'sticky' mould
  • Re-house the spools in a new, clean shell
  • Digitise via multiple passes, cleaning the DAT machine between plays. For these tapes we used our Sony PCM-7040

Sony ceased production of new DAT machines in 2005, and working, professional machines are becoming rare. We spend a considerable (and usually enjoyable) amount of time and resources keeping our machines in good condition. The Sony PCM-7040 is one of the better DAT machines in terms of the robustness of the tape transport and the availability of machine parts, as the same transport system was used in many Sony DDS drives for computer backup.

The High - Martin Hannett Sessions DAT master shell open with white mould visible on surface of tape pack

DAT during manual unwinding, showing mould-induced tendency for tape to stick to itself

The problem of mould growth on DATs is not unique to these precious Hannett / The High recordings.

Most DATs are now between 20 and 30 years old, and it only takes one period of storage at high temperature and/or relative humidity (RH) for mould to set in. To avoid damage, magnetic tape must be stored consistently at 18–21°C and 45–50% RH - something which no garage, attic or back room can guarantee...

We regularly receive mouldy DATs at the Greatbear studio. So much important material was mastered to DAT in the 1990s, and its vulnerabilities make it a priority for digitisation.

Support your local independent record shop on Record Store Day and every day!

Transfer your Digital Audio Tapes (DATs) to a stable format!

 

Posted by melanie in audio tape, digitisation expertise, 0 comments

Pre-Figurative Digital Preservation

How do you start preserving digital objects if your institution or organisation has little or no capacity to do so?

Digital preservation can at first be bit-part and modular. You can build your capacity one step at a time. Once you’ve taken a few steps you can then put them together, making a ‘system’.

It’s always good to start from first principles, so make sure your artefacts are adequately described, with consistent file-naming and detailed contextual information.

You might want to introduce tools such as Fixity into your workflow, which can help you keep track of file integrity.

For audio visual content, get familiar with MediaInfo and MediaConch, by MediaArea, QCTools, by BAVC, or Exactly, by AVP.
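If you want to see what such integrity checking involves under the hood, here is a minimal sketch in Python of the idea behind fixity tools: record a checksum for every file, then compare against that record on each later run. The paths and manifest name are examples, and this illustrates the principle rather than replacing a maintained tool like Fixity.

```python
# Minimal fixity sketch: hash every file under a folder and compare the
# result with a previously saved manifest. Example paths throughout.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
            digest.update(chunk)
    return digest.hexdigest()

def make_manifest(folder: Path) -> dict:
    return {str(p.relative_to(folder)): sha256_of(p)
            for p in sorted(folder.rglob("*")) if p.is_file()}

def check(folder: Path, manifest_file: Path) -> None:
    old = json.loads(manifest_file.read_text())
    new = make_manifest(folder)
    for name in sorted(set(old) | set(new)):
        if name not in new:
            print("MISSING ", name)
        elif name not in old:
            print("NEW     ", name)
        elif old[name] != new[name]:
            print("CHANGED ", name)  # possible corruption - investigate

folder = Path("my_archive")  # example collection folder
Path("manifest.json").write_text(json.dumps(make_manifest(folder), indent=2))
check(folder, Path("manifest.json"))  # run again after copying or migrating
```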

 

 

Think of this approach as pre-figurative digital preservation. It’s the kind of digital preservation you can do even if you don’t (yet) have a large-scale digital repository. Pre-figurative digital preservation is when you organise and regularly assess the condition of your collections as if they were managed in a large repository.

So when that day comes and you get the digital content management system you deserve, those precious zeros and ones can be ingested with relative ease, ready to be managed through automated processes. Pre-figurative digital preservation is an upgrade on the attitude that preserving files to make them accessible, often using lossy compression, is ‘good enough’ (we all know that’s not good enough!).

Pre-figurative digital preservation can help you build an information system that fits your needs and capacities. It is a way to do something rather than avoid the digital preservation ‘problem’ because it seems too big and technically complex.

Learning New Skills

The challenge of managing digitised and born-digital material means archivists will inevitably have to learn new skills. This can feel daunting and time-consuming, as an archivist we have recently worked with told us:

‘I would love to acquire new skills but realistically there’s going to be a limit to how much I can learn of the technical stuff. This is partly because I have very small brain but also partly because we have to stretch our resources very thin to cover all the things we have to do as well as digital preservation.’

Last year the Society of American Archivists launched the Try5 for Ongoing Growth initiative. It offers a framework for archivists who want to develop their technological knowledge. The idea is you learn 5 new technical skills, share your experience (using #Try5SAA) and then help someone else on the basis of what you’ve learnt.

Bertram Lyons from AV Preserve outlined 5 things the under-confident but competence-hungry (audiovisual) archivist could learn to boost their skill set.

These include getting familiar with your computer’s Command Line Interface (CLI), creating and running Checksums, Digital File Packaging, Embedding and Extracting Metadata and understanding Digital Video. Lyons provides links to tutorials and resources that are well worth exploring.

Expanding, bit by bit

If your digital collections are expanding bit by bit and you are yet to tackle the digital elephant in the room, it may well be time to try pre-figurative digital preservation.

We’d love to hear more from archivists whose digital preservation system has evolved in a modular fashion. Let us know in the comments what approaches and tools you have found useful.

 

Posted by debra in audio / video heritage, audio tape, digitisation expertise, 0 comments

VHS – Re-appraising Obsolescence

VHS was a hugely successful video format from the late 1970s to early 2000s. It was adopted widely in domestic and professional contexts.

Due to its familiarity and apparent ubiquity you might imagine it is easy to preserve VHS.

Well, think again.

VHS is generally considered to be a low preservation risk because playback equipment is still (just about) available.

There is, however, a huge degree of variation within VHS. This is even before we consider improvements to the format, such as S-VHS (1987), which increased luminance bandwidth and picture quality.

Complicating the preservation picture

The biggest variation within VHS is of recording speed.

Recording speed affects the quality of the recording. It also dictates which machines you can use to play back VHS tapes.

2 large, light-coloured professional video machines with digital counters, needle gauges and multiple dials

SONY SVO-500P and Panasonic AG-650

Domestic VHS could record at three different speeds: Standard Play, which yielded the best quality recordings; Long Play, which doubled recording time but compromised the quality of the recording; Extended or Super Long Play, which trebled recording time but significantly reduced the recording quality. Extended/ Super Long Play was only available on the NTSC standard.

It is generally recognised that you should always use the best quality machines at your disposal to preserve magnetic media.

VHS machines built for domestic use and the more robust industrial models vary significantly in quality.

Richard Bennette in The Videomaker wrote (1995): ‘In more expensive VCRs, especially industrial models, the transports use thicker and heavier mounting plates, posts and gears. This helps maintain the ever-critical tape signal distances over many more hours of usage. An inexpensive transport can warp or bend, causing time base errors in the video signals’.

Yet better quality VHS machines, such as the Sony SVO-5800P and Panasonic AG-8700 that we use in the Greatbear Studio, cannot play back Long or Extended Play recordings. They only recorded—and therefore can only play back—Standard Play signals.

This means that recordings made at slower speeds can only be transferred using domestic VHS machines, such as the JVC HM-DR10000 D-VHS or the JVC HR-DVS3 EK.

Domestic VHS tape: significant problems to come

This poses two significant problems within a preservation context.

Firstly, there is concern about the availability of high-functioning domestic VHS machines in the immediate and long-term.

Domestic VHS machines were designed to be mass produced and affordable to the everyday consumer. Parts were made from cheaper materials. They simply were not built to last.

JVC stopped manufacturing standalone VHS machines in 2008.

Used VHS machines are still available. Given the comparative fragility of domestic machines, the ubiquity of the VHS format—especially in its domestic variation—is largely an illusion.

The second problem is the quality of the original Long or Extended Play recording.

silver and black slimline VHS machine

JVC Super-VHS ET

One reason for VHS’s victory over Betamax in the ‘videotape format wars’ was that VHS could record for three hours, compared with Betamax’s one.

As with all media recorded on magnetic tape, slower recording speeds produce poorer quality video and audio.

An Extended Play recording made on a domestic VHS is already in a compromised position, even before you put it in the tape machine and press ‘play.’

Which leads us to a further and significant problem: the ‘press play’ moment.

Interchangeability—the ability to play back a tape on a machine different to the one it was recorded on—is a massive problem with video tape machines in general.

The tape transport is a sensitive mechanism and can be easily knocked out of sync. If the initial recording was made with a mis-aligned machine it is not certain to play back on another, differently aligned machine. Slow recording complicates alignment further, as there is more room for error in the recording process.

The preservation of Long and Extended Play VHS recordings is therefore fraught with challenges that are not always immediately apparent.

(Re)appraising VHS

Aesthetically, VHS continues to be celebrated in art circles for its rendering of the ‘poor image’. The decaying, unstable appearance of the VHS signal is a direct result of extended recording times that threaten its practical ability to endure.

Variation of recording time is the key point of distinction within the VHS format. It dramatically affects the quality of the original recording and dictates the equipment a tape can be played back on. With this in mind, we need to distinguish between standard, long and extended play VHS recordings when appraising collections, rather than assuming ‘VHS’ covers everything.

One big stumbling block is that you cannot tell the recording speed by looking at the tape itself. There may be metadata that can indicate this, or help you make an educated guess, but this is not always available.

We recommend, therefore, not to assume VHS—and other formats that straddle the domestic/ professional divide such as DVCAM and 8mm video—is ‘safe’ from impending obsolescence. Despite the apparent availability and familiarity of VHS, the reality is far more complex and nuanced.

***

As ever, Greatbear are more than happy to discuss specific issues affecting your collection.

Get in touch with us to explore how we can work together.

Posted by debra in digitisation expertise, video tape, 1 comment

SONY’s U-matic video cassette

Introduced by SONY in 1971, U-matic was, according to Jeff Martin, 'the first truly successful videocassette format'.

Philips’ N-1500 video format dominated the domestic video tape market in the 1970s; by 1974, U-matic was widely adopted in industrial and institutional settings. The format also performed a key role in the development of Electronic News Gathering, thanks to its portability, cost effectiveness and rapid integration into programme workflows. Compared with 16mm film, U-matic had many strengths.

The design of the U-matic case mimicked a hardback book. Mechanical properties were modelled on the audio cassette's twin spool system.

Like the Philips compact audio cassette developed in the early 1960s, U-matic was a self-contained video playback system. This required minimal technical skill and knowledge to operate.

There was no need to manually lace the video tape through the transport, or even rewind before ejection, as with SONY's open reel video tape formats EIAJ 1/2" and 1" Type C. Stopping and starting the tape was immediate, and swapping tapes was quick and easy. U-matic ushered in a new era of efficiency and precision in video tape technology.

Mobile news-gathering on U-matic video tape

Emphasising technical quality and user-friendliness was key to marketing U-matic video tape.

As SONY's product brochure states, 'it is no use developing a TV system based on highly sophisticated knowledge if it requires equally sophisticated knowledge to be used'.

The 'ease of operation' is demonstrated in publicity brochures in a series of images which guide the prospective user through the tape machine interface. The human operator, insulated from the complex mechanical principles making the machine tick, only needs to know a few things: how to feed content and direct pre-programmed functions such as play, record, fast forward, rewind and stop.

New Applications

Marketing material for audio visual technology often helps the potential buyer imagine possible applications. This is especially true when a technology is new.

For SONY’s U-matic video tape it was the ‘very flexibility of the system’ that was emphasised. The brochure recounts a story of an oil tanker crew stationed in the middle of the Atlantic.

After they watch a football match the oil workers sit back and enjoy a new health and safety video. ‘More inclined to take the information from a television set,’ U-matic is presented as a novel way to combine leisure and work.

Ultimately ‘the obligation for the application of the SONY U-matic videocassette system lies with the user…the equipment literally speaks for itself.’

International Video Networks

Before the internet arrived, SONY believed video tape was the media to connect global businesses.

'Ford, ICI, Hambro Life, IBM, JCB...what do these companies have in common, apart from their obvious success? Each of these companies, together with many more, have accepted and installed a new degree of communications technology, the U-matic videocassette system. They need international communication capability. Training, information, product briefs, engineering techniques, sales plans…all can be communicated clearly, effectively by means of television'.

SONY heralded videotape's capacity to reach 'any part of the world...a world already revolutionised by television.' Video tape distributed messages in 'words and pictures'. It enabled simultaneous transmission and connected people in locations as 'wide as the world's postal networks.' With appropriate equipment interoperability between different regional video standards - PAL, NTSC and SECAM - was possible.

Video was imagined as a powerful virtual presence serving international business communities. It was a practical money-saving device and effective way to foster inter-cultural communication: 'Why bring 50 salesmen from the field into Head Office, losing valuable working time when their briefing could be sent through the post?'

Preserving U-Matic Video Tape

According to the Preservation Self-Assessment Program, U-matic video tape ‘should be considered at high preservation risk’ due to media and hardware obsolescence. A lot of material was recorded on the U-matic format, especially in media and news-gathering contexts. In the long term there is likely to be more tape than working machines.

Despite these important concerns, at Greatbear we find U-matic a comparatively resilient format. Part of the reason for this is the ¾” tape width and the presence of guard bands that are part of the U-matic video signal. Guard bands were used on U-matic to prevent interference or ‘cross-talk’ between the recorded tracks.

In early video tape design, guard bands were seen as a waste of tape. Slant azimuth technology, a technique which enabled tracks to be recorded right next to each other, was integrated into later formats such as Betamax and VHS. As video tape evolved, it became a whole lot thinner.

In a preservation context thinner tape can pose problems. If the tape surface is damaged and there is less tape to work with, it is harder to read a signal during playback. In the case of digital tape, damage to a smaller surface area can result in catastrophic signal loss. Analogue formats such as U-matic often fare better, regardless of age.

Paradoxically it would seem that the presence of guard bands insulates the recorded signal from total degradation: because there is more tape there is a greater margin of error to transfer the recorded signal.

Like other formats, such as SONY's EIAJ, certain brands of U-matic tape can pose problems. Early SONY, Ampex and Kodak branded tape may need dehydration treatment ('baking') to prevent shedding during playback. If baking is not appropriate, we tend to digitise in multiple passes, intervening frequently to clean the video heads of potentially clogging material. If your U-matic tape smells of wax crayons, this is a strong indication there are issues; the wax crayon smell seems only to affect SONY branded tape.

Concerns about hardware obsolescence should of course be taken seriously. Early 'top loading' U-matic machines are fairly unusable now.

Mechanical and electronic reliability for 'front loading' U-matic machines such as the BVU-950 remains high. The durability of U-matic machines becomes even more impressive when contrasted with newer machines such as DVCPRO, DVCAM and Digibeta models, which tend to suffer relatively frequent capacitor failure.

Later digital video tape formats also use surface-mounted custom integrated circuits, which are harder to repair at component level. Through-hole technology, used in the circuitry of U-matic machines, makes it easier to refurbish parts that are no longer working.

 

Transferring your U-matic Collections

U-matic made video cassette a core part of many industries. Flexible and functional, its popularity endured until the 1990s.

Greatbear has a significant suite of working NTSC/ PAL/ SECAM U-matic machines and spare parts.

Get in touch by email or phone to discuss transferring your collection.

Through-hole technology

Posted by debra in digitisation expertise, video tape, video technology, machines, equipment, 0 comments

Going CD-R-less – digital file-based delivery

Often customers ask us to deliver their transferred sound files on CD, in effect an audio CD-R of the transfer.

Although these recordings can still be high resolution there remains a world of difference—in an archival sense—between a CD-R, burnt on a computer drive (however high the quality of drive and disc), and CD recordings made in the context of the professional music industry.

The CD format is far from ‘obsolete’, and recent history has shown us repeatedly that formats deemed ‘dead’, such as vinyl or the audio cassette, can become fashionable again.

Yet when it comes to the preservation of your audio and video archives, it is a good idea to think about this material differently. It is one thing to listen to your favourite artist on CD, in other words, but that precious family recording of your Grandfather discussing his life history on a CD-R is different.

Because of this, we believe that supplying customers with digital files, on hard drive or USB stick, is, in 2016 and beyond, a much better option. Holding a recording in physical form in the palm of your hand can be reassuring. Yet if you’ve already transferred valuable recordings once to ensure you can listen to them…

Why risk having to do it again?

CD-Rs are, quite simply, not a reliable archival medium. Even optical media that claim spectacular longevity, such as the ‘1,000 year proof’ M-Disc, are unlikely to survive the warp and weft of technological progress.

Exposure to sunlight can render CD-Rs and DVDs unreadable. If the surface of a CD-R becomes scratched, its readability is severely compromised.

There is also the issue of compatibility between burners and readers, as pointed out in the ARSC Guide to Audio Preservation:

There are standards for CD-R discs to facilitate the interchange of discs between burners and readers. However, there are no standards covering the burners or readers themselves, and the disc standards do not take preservation or longevity into consideration. Several different burning and reading speeds were developed, and earlier discs or burners are not compatible with later, faster speeds. As a result, there is considerable variability in whether any given disc can be read by any given reader (30).

Furthermore, disc drives on computers are becoming less common. It would therefore be unwise to store valuable recordings exclusively on this medium if you want them to have the best chance of long-term survival.

In short, the CD-R is just another obsolete format (and an unreliable one at that). Of course, once you have the digital files there is nothing stopping you from making access copies on CD-R for friends and family. Having the digital files as source format gives you greater flexibility to share, store and duplicate your archival material.

File-based preservation

The threat of obsolescence haunts all digital media, to a degree. There is no one easy, catch-all solution for preserving the media we produce now, which is, almost exclusively, digital.

Yet given the reality of the situation, and the desire people harbour to return to recordings that are important to them, it makes sense that non-experts gain a basic understanding of what digital preservation may entail for them.

There is a growing number of online resources for people who want to get familiar with the rudiments of personal digital archiving. It would be very difficult to cover all the issues here, so the comments below are limited to a few observations.

It is true that managing a digital collection requires a different kind of attitude – and skill set – to analogue archiving, which is far less labour intensive. You cannot simply transfer your digital files onto a hard drive, put it on the shelf and forget about it for ten to fifteen years. If you were to do this, there is a very real possibility the files could not be opened when you return to them.


Screenshot taken from the DPC guide to Personal Digital Archiving

As Gabriela Redwine explains in the Digital Preservation Coalition’s Technology Watch Report on Personal Digital Archiving, ‘the reality of ageing hardware and software requires us to be actively attuned to the age and condition of the digital items in our care.’ The emerging personal digital archivist therefore needs to actively engage with their collections if their digital files are to survive in the long term.

Getting to grips with digital preservation, even at a basic level, will undoubtedly involve learning a variety of new skills, terms and techniques. Yet there are some simple, and fairly non-technical, things you can do to get started.

The first point to emphasise is the importance of saving files in more than one location. This is probably the most basic principle of digital preservation.

The good news about digital files is they can be moved, copied and shared with family and friends all over the world with comparable ease. So if there is a fire in one location, or a computer fails in another, it is likely that the file will still be safe in the other place where it is stored.

Employing consistent and clear file naming is also very important, as this enables files to be searched for and found easily.
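By way of illustration, here is one possible naming pattern sketched in Python. The convention itself (an ISO date, a short descriptive slug, the file's role in the collection) is only an example; any scheme works, provided it is applied consistently.

```python
# A hypothetical naming convention: ISO date, descriptive slug, file role.
# Dates sort chronologically, and the role distinguishes masters from copies.
from datetime import date

def archive_name(recorded: date, slug: str, role: str, ext: str) -> str:
    return f"{recorded.isoformat()}_{slug}_{role}.{ext}"

print(archive_name(date(1987, 6, 21), "grandfather-oral-history", "master", "wav"))
# -> 1987-06-21_grandfather-oral-history_master.wav
```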

Beyond this, things get a little more complicated and a whole lot more computer-based. We move into the more specialist area of digital preservation, with its heady language of metadata, checksums and emulation, among other terms.

The need for knowledge and competencies

At present it can feel like there is a chasm between the world of private digital archiving, where people rely on third party solutions such as Google or Amazon to store and manage their files, and the professional field of digital preservation, which is populated by tech-specialists and archival whizz-kids.

The reality is that as we move deeper into the digital, file-based future, ordinary people will need to adopt existing preservation tools if they are to learn how to manage their digital collections in a more direct and informed way.

Take, for example, the often cited recommendation for people to migrate or back up their collections on different media at annual or bi-annual intervals. While this advice may be sound, should people be doing this without profiling the file integrity of their collections first? What’s the point in migrating a collection of files, in other words, if half of those files are already corrupted?

In such instances as these, the everyday person may wish to familiarise themselves with existing software tools that can be used to assess and identify potential problems with their personal collections.

DROID (Digital Record Object IDentification), for example, a software tool developed by the UK National Archives, profiles files in your collection in order to facilitate ‘digital continuity’, ‘the ability to use digital information in the way that you need, for as long as you need.’

The open source software can identify over 200 of the most common document, image, audio and video files. It can help tell you what versions you have, their age and size, and when they were last changed. It can also help you find duplicates, and manage your file space more efficiently. DROID can be used to scan individual files or directories, and produces this information in a summary report. If you have never assessed your files before it may prove particularly useful, as it can give a detailed overview.
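As a rough illustration of the kind of overview such a profile gives, the sketch below groups files by extension, totals their sizes and flags byte-identical duplicates. DROID itself identifies formats from internal signatures rather than file extensions, so this is only a first approximation of what it does; the collection path is an example.

```python
# A toy collection profile: counts and sizes per extension, plus duplicates
# found by checksum. "my_collection" is an example path.
import hashlib
from collections import defaultdict
from pathlib import Path

def file_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def profile(folder: Path) -> None:
    by_ext = defaultdict(lambda: [0, 0])  # extension -> [file count, bytes]
    by_digest = defaultdict(list)         # checksum  -> paths sharing content
    for p in folder.rglob("*"):
        if p.is_file():
            ext = p.suffix.lower() or "(no extension)"
            by_ext[ext][0] += 1
            by_ext[ext][1] += p.stat().st_size
            by_digest[file_digest(p)].append(p)
    for ext, (count, size) in sorted(by_ext.items()):
        print(f"{ext:15} {count:6} files {size / 1e6:10.1f} MB")
    for paths in by_digest.values():
        if len(paths) > 1:
            print("duplicates:", ", ".join(str(p) for p in paths))

profile(Path("my_collection"))
```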

A big drawback of DROID is that it requires programming knowledge to install, so it is not immediately accessible to those without such specialist skills. Fixity is a more user-friendly open source software tool that enables people to monitor their files, tracking file changes or corruptions. Tools like Fixity and DROID do not ensure that digital files are preserved on their own; they help people to identify and manage problems within their collections. A list of other digital preservation software tools can be found here.

For customers of Greatbear, who are more than likely to be interested in preserving audiovisual archives, AV Preserve have collated a fantastic list of tools that can help people both manage and practice audiovisual preservation. For those interested in the different scales of digital preservation that can be employed, the NDSA (National Digital Stewardship Alliance) Levels of Preservation offers a good overview of how a large national institution envisions best practice.

Tipping Points

We are, perhaps, at a tipping point for how we play back and manage our digital data. The 21st century has been characterised by the proliferation of digital artefacts and memories. The archive, as a fundamental shaper of individual and community identities, has taken centre stage in our lives.

Given this unparalleled situation, new competencies and confidences certainly need to be gained if the personal archiving of digital files is to become an everyday reality at a far more granular and empowered level than is currently the norm.

Maybe, one day, checking the file integrity of one’s digital collection will be seen as comparable to other annual or bi-annual activities, such as going to the dentist or taking the car for its MOT.

We are not quite there yet, that much is certain. This is largely because companies such as Google make it easy for us to store and efficiently organise personal information in ways that feel secure and manageable. These services stand in stark contrast to the relative complexity of digital preservation software, and the computational knowledge required to install and maintain it (not to mention the amount of time it could take to manage one’s digital records, if you really dedicated yourself to it).

Growing public knowledge about digital archiving, the desire for new knowledge and competencies, and the pragmatic fact that digital archives are easier to manage in file-based systems may all help the gap between professional digital preservation practices and the interests of everyday digital citizens to close gradually over time. Dialogue and greater understanding are most certainly needed if we are to move forward from the current context.

Greatbear want to be part of this process by helping customers have confidence in file-based delivery, rather than rely on formats that are obsolete, of poorer quality and counter-intuitive to the long term preservation of audio visual archives.

We are, as ever, happy to explain the issues in more detail, so please do contact us if there are issues you want to discuss.

We also provide a secure CD to digital file transcription service: Digital audio (CD-DA), data (CD-ROM), audio and data write-once (CD-R) and rewritable media (CD-RW) disc transfer.

Posted by debra in audio tape, digitisation expertise, 0 comments

Videokunstarkivet’s Mouldy U-matic Video Tapes

Lives and Videotapes

Last year we featured the pioneering Norwegian Videokunstarkivet (Video Art Archive) on the Greatbear tape blog.

In one of our most popular posts, we discussed how Videokunstarkivet has created a state-of-the-art video archive, using open source software to preserve, manage and disseminate Norway’s video art histories for contemporary audiences and beyond.

In Lives and Videotapes, the beautiful collection of artist’s oral histories collected as part of the Videokunstarkivet project, the history of Norwegian video art is framed as ‘inconsistent’.

This is because, Mike Sperlinger eloquently writes, ‘in such a history, you have [to] navigate by the gaps and contradictions and make these silences themselves eloquent. Videotapes themselves are like lives in that regard, the product of gaps and dropout—the shedding not only of their material substance, but of the cultural categories which originally sustained them’ (8).

The question of shedding, and how best to preserve the integrity of the audiovisual archive object, is of course a vexed one that we have discussed at length on this blog.

It is certainly an issue for the last collection of tapes that we received from Videokunstarkivet—a number of very mouldy U-matic tapes.

Dry mould inside a U-matic cassette shell

According to the Preservation Self-Assessment Program website, ‘due to media and hardware obsolescence’ U-matic ‘should be considered at high preservation risk.’

At Greatbear we have stockpiled quite a few different U-matic machines which reacted differently to the Videokunstarkivet tapes.

As you can see from the photo, they were in a pretty bad way.

Note the white, dusty-flaky quality of the mould in the images. This is what tape mould looks like after it has been rendered inactive, or ‘driven into dormancy.’ If mould is active it will be wet, smudging if touched. In this state it poses the greatest risk of infection, and affected items need to be isolated immediately from other items in the collection.

Once the mould has become dormant it is fairly easy to get the mould off the tape using brushes, vacuums with HEPA filters and cleaning solutions. We also used a machine specifically for the cleaning process, which was cleaned thoroughly afterwards to kill off any lingering mould.

The video tape being played back on the VO9800 U-matic

This extract demonstrates how the VO9800 replayed the whole tape, yet the quality wasn’t perfect. The tell-tale signs of mould infestation are present in the transferred signal.

Visual imperfections, which begin as tracking lines and escalate into a fuzzy black-out of the image, are evidence of how mould has extended across the surface of the tape, preventing a clear reading of the recorded information.

Despite this range of problems, the VO9800 replayed the whole tape in one go with no head clogs.

SONY BVU 950

The video tape being played back on SONY BVU 950

In its day, the BVU-950 was a much higher-specced U-matic machine than the VO9800. As the video extract demonstrates, it replayed some of the tape without the artefacts produced by the VO9800 transfer, probably due to its deeper head tip penetration.

Yet this deeper head penetration also meant extreme tape head clogs on the sections that were affected badly by mould—even after extensive cleaning.

Removing the shed material from the machine before the transfer could continue took, in turn, a significant amount of time.

Mould problems

The playback of the tapes certainly underscores how deeply damaging damp conditions are for magnetic tape collections, particularly when they lead to endemic mould growth.

Yet the quality of the playback we managed to achieve also underlines how a signal can be retrieved, even from the most mould-mangled analogue tapes. The same cannot be said of digital video and audio, which of course is subject to catastrophic signal loss under similar conditions.

As Mike Sperlinger writes above, the shedding and drop outs are important artefacts in themselves. They mark the life-history of magnetic tapes, objects which so-often exist at the apex of neglect and recovery.

The question we may ask is: which transfer is better and more authentic? Yet this question is maddeningly difficult to answer in an analogue world defined by the continuous variation of the played back signal. And this variation is certainly amplified within the context of archival transfers when damage to tape has become accelerated, if not beyond repair.

At Greatbear we are in the good position of having a number of machines, which enables us to test and experiment with different approaches.

One thing is clear: for challenging collections, such as these items from the Videokunstarkivet, there is no one-size-fits-all answer to achieve the optimal transfer.

Posted by debra in audio / video heritage, digitisation expertise, video tape, 2 comments

Mouldy DATs

We have previously written on this blog about the problems that can occur when transferring Digital Audio Tapes (DATs).

According to preliminary findings from the British Library’s important survey of the UK’s sound collections, there are 3353 DAT tapes in the UK’s archives.

While this is by no means a final figure (and does not include the holdings of record companies and DATheads), it does suggest there is a significant amount of audio recorded on this obsolete format which, under certain conditions, is subject to catastrophic signal loss.

The condition we are referring to is that old foe of magnetic tape: mould.

In contrast with existing research about threats to DAT, which emphasise how the format is threatened by ‘known playback problems that are typically related to mechanical alignment’, the biggest challenges we consistently face with DATs is connected to mould.

It is certainly acknowledged that ‘environmental conditions, especially heat, dust, and humidity, may also affect cassettes.’

Nevertheless, the specific ways mould growth compromises the very possibility of successfully playing back a DAT tape have not yet been fully explored. This in turn shapes the kinds of preservation advice offered about the format.

What follows is an attempt to outline the problem of mould growth on DATs which, even in minimal form, can pretty much guarantee the loss of several seconds of recording.

DAT tape size

Tape width issues

The first problem with DATs is that the tape is just 3.81mm wide, and very thin in comparison to other forms of magnetic tape.

The size of the tape is compounded by the helical scan method used in the format, which records the signal as a diagonal stripe across the tape. Because tracks are written onto the tape at an angle, a split is never a neat one that can be easily spliced back together.

The only way to deal with splits is to wind the tape back onto the spool, or use leader tape to join the tape back together at the break point.

Either way, you are guaranteed to lose a section of the tape because the helical scan has imprinted the recorded signal at a sharp, diagonal angle. If a DAT tape splits, in other words, it cuts through the diagonal signal, and because it is digital rather than analogue audio, this results in irreversible signal loss.

And why does the tape split? Because of the mould!

If you play back a DAT displaying signs of dormant mould-growth it is pretty much guaranteed to split in a horrible way. The tape therefore needs to be disassembled and wound by hand. This means you can spend a lot of time restoring DATs to a playable condition.

Rewinding by hand is however not 100% fool-proof, and this really highlights the challenges of working with mouldy DAT tape.

Often mould on DATs is visible on the edge of the tape pack because the tape has been so tightly wound it doesn’t spread to the full tape surface.

In most cases with magnetic tape, mould on the edge is good news because it means it has not spread and infected the whole of the tape. Not so with DAT.

Even with tiny bits of mould on the edge of the tape there is enough to stick it to the next bit of tape as it is rewound.

When greater tension is applied in an attempt to release the stuck layers, the tape rips.

A plausible explanation for DAT tape ripping is that, due to the width and thinness of the tape, the mould is structurally stronger than the tape itself: the mould binds the layers together more strongly than the tape can withstand.

When tape is thicker, for example with 1/4” open reel tape, it is easier to brush off the dormant mould, which is why we don’t see the ripping problem with all kinds of tape.

Our experience confirms that brushing off dormant mould is not always possible with DATs which, despite best efforts, can literally peel apart because of sticky mould.

What, then, is to be done to ensure that the 3353 (and counting) DAT tapes in existence remain in a playable condition?

One tangible form of action is to check that your DATs are stored at the appropriate temperature (40–54°F [4.5–12°C]) so that no mould growth develops on the tape pack.

The other thing to do is simple: get your DAT recordings reformatted as soon as possible.

While we want to highlight the often overlooked issue of mould growth on DATs, the problems with machine obsolescence, a lack of tape head hours and mechanical alignment problems remain very real threats to successful transfer of this format.

Our aim at Greatbear is to continue our research in the area of DAT mould growth and to publish as we learn more.

As ever, we’d love to hear about your experiences of transferring mouldy DATs, so please leave a comment below if you have a story to share.

 

Posted by debra in audio tape, digitisation expertise, 2 comments

Codecs and Wrappers for Digital Video

In the last Greatbear article we quoted sage advice from the International Association of Sound and Audiovisual Archives (IASA): ‘Optimal preservation measures are always a compromise between many, often conflicting parameters.’ [1]

While this statement is true in general for many different multi-format collections, the issue of compromise and conflicting parameters becomes especially apparent with the preservation of digitised and born-digital video. The reasons for this are complex, and we outline them below.

Lack of standards (or are there too many formats?)

Carl Fleischhauer writes, reflecting on the Federal Agencies Digitization Guidelines Initiative (FADGI) research exploring Digital File Formats for Videotape Reformatting (2014), ‘practices and technology for video reformatting are still emergent, and there are many schools of thought. Beyond the variation in practice, an archive’s choice may also depend on the types of video they wish to reformat.’ [2]

We have written in depth on this blog about the labour intensity of digital information management in relation to reformatting and migration processes (which are of course Greatbear’s bread and butter). We have also discussed how the lack of settled standards tends to make preservation decisions radically provisional.

In contrast, we have written about default standards that have emerged over time through common use and wide adoption, highlighting how parsimonious, non-interventionist approaches may be more practical in the long term.

The problem for those charged with preserving video (as opposed to digital audio or images) is that ‘video, however, is not only relatively more complex but also offers more opportunities for mixing and matching. The various uncompressed-video bitstream encodings, for example, may be wrapped in AVI, QuickTime, Matroska, and MXF.’ [3]

What then, is this ‘mixing and matching’ all about?

It refers to all the possible combinations of bitstream encodings (‘codecs’) and ‘wrappers’ that are available as target formats for digital video files. Want to mix your JPEG2000 – Lossless with your MXF, or FFV1 with your AVI? Well, go ahead!

What, then, is the difference between a codec and a wrapper?

As the FADGI report states: ‘Wrappers are distinct from encodings and typically play a different role in a preservation context.’ [4]

The wrapper or ‘file envelope’ stores key information about the technical life or structural properties of the digital object. Such information is essential for long term preservation because it helps to identify, contextualize and outline the significant properties of the digital object.

Information stored in wrappers can include:

  • Content (number of video streams, length of frames);
  • Context (title of object, who created it, description of contents, re-formatting history);
  • Video rendering (width, height and bit-depth, colour model within a given colour space, pixel aspect ratio, frame rate, compression type, compression ratio and codec);
  • Audio rendering (bit depth and sample rate, bit rate and compression codec, type of uncompressed sampling);
  • Structure (relationship between audio, video and metadata content). (Adapted from the Jisc infokit on High Level Digitisation for Audiovisual Resources.)

Codecs, on the other hand, define the parameters of the captured video signal. They are a ‘set of rules which defines how the data is encoded and packaged,’ [5] encompassing Width, Height and Bit-depth, Colour Model within a given Colour Space, Pixel Aspect Ratio and Frame Rate; the bit depth and sample rate and bit rate of the audio.

Although the wrapper is distinct from the encoded file, the encoded file cannot be read without its wrapper. The digital video file, then, comprises a wrapper and at least one codec, often two, to account for audio and images, as this illustration from AV Preserve makes clear.

Codecs and Wrappers

Diagram taken from AV Preserve’s A Primer on Codecs for Moving Image and Sound Archives
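If you want to see the wrapper/codec distinction for yourself, one quick way is to inspect a file with ffprobe, part of the free FFmpeg toolkit. Here is a minimal Python sketch, assuming ffprobe is installed and on your PATH; 'master.mkv' is a stand-in filename.

```python
# Ask ffprobe for a JSON description of a file: the "format" section
# describes the wrapper, each entry in "streams" describes a codec.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", "master.mkv"],
    capture_output=True, text=True, check=True)
info = json.loads(result.stdout)

print("wrapper:", info["format"]["format_long_name"])
for stream in info["streams"]:
    print(stream["codec_type"], "codec:", stream.get("codec_name"))
# e.g. wrapper: Matroska / WebM; video codec: ffv1; audio codec: pcm_s24le
```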

Pick and mix complexity

Why then, are there so many possible combinations of wrappers and codecs for video files, and why has a settled standard not been agreed upon?

Fleischhauer at The Signal does an excellent job outlining the different preferences within practitioner communities, in particular relating to the adoption of ‘open’ and commercial/ proprietary formats.

Compellingly, he articulates a geopolitical divergence between these two camps, with those based in the US allegedly opting for commercial formats, and those in Europe opting for ‘open.’ This observation is all the more surprising because of the advice in FADGI’s Creating and Archiving Born Digital Video: ‘choose formats that are open and non-proprietary. Non-proprietary formats are less likely to change dramatically without user input, be pulled from the marketplace or have patent or licensing restrictions.’ [6]

One answer to the question of why there are so many different formats lies in competing approaches to information management in an information-driven economy. The combination of competition and innovation results in a proliferation of open source formats and their proprietary doubles (or triplets, quadruples, etc.) that are constantly evolving in response to market ‘demand’.

Impact of the Broadcast Industry

An important area to highlight driving change in this area is the role of the broadcast industry.

Format selections in this sector have a profound impact on the creation of digital video files that will later become digital archive objects.

In the world of video, Kummer et al explain in an article in the IASA journal, ‘a codec’s suitability for use in production often dictates the chosen archive format, especially for public broadcasting companies who, by their very nature, focus on the level of productivity of the archive.’ [7] Broadcast production companies create content that needs to be retrieved, often in targeted segments, with ease and accuracy. They approach the creation of digital video objects differently to an archivist, who would be concerned with maintaining file integrity rather than ensuring the source material’s productivity.

Furthermore, production contexts in the broadcast world have a very short life span: ‘a sustainable archiving decision will have to [be] made again in ten years’ time, since the life cycle of a production system tends to be between 3 and 5 years, and the production formats prevalent at that time may well be different to those in use now.’ [8]

Take, for example, H.264/ AVC ‘by far the most ubiquitous video coding standard to date. It will remain so probably until 2015 when volume production and infrastructure changes enable a major shift to H.265/ HEVC […] H.264/ AVC has played a key role in enabling internet video, mobile services, OTT services, IPTV and HDTV. H.264/ AVC is a mandatory format for Blu-ray players and is used by most internet streaming sites including Vimeo, youtube and iTunes. It is also used in Adobe Flash Player and Microsoft Silverlight and it has also been adopted for HDTV cable, satellite, and terrestrial broadcasting,’ writes David Bull in his book Communicating Pictures.

HEVC, which is ‘poised to make a major impact on the video industry […] offers the potential for up to 50% compression efficiency improvement over AVC.’ Furthermore, HEVC has a ‘specific focus on bit rate reduction for increased video resolutions and on support for parallel processing as well as loss resilience and ease of integration with appropriate transport mechanisms.’ [9]

CODEC quality chart

Increased compression

The development of codecs for use in the broadcast industry deploys increasingly sophisticated compression schemes that reduce bit rate but retain image quality. As AV Preserve explain in their codec primer paper, ‘we can think of compression as a second encoding process, taking coded information and transferring or constraining it to a different, generally more efficient code.’ [10]
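As a toy illustration of that idea, consider run-length encoding, sketched below. Real video codecs are vastly more sophisticated (transforms, motion compensation, entropy coding), but the principle of re-expressing the same information in a more efficient code is the same.

```python
# Run-length encoding: a repetitive 'scanline' of pixels collapses into
# (value, count) pairs - the same information in a shorter code.
def rle_encode(data: str) -> list[tuple[str, int]]:
    out: list[tuple[str, int]] = []
    for ch in data:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

row = "WWWWWWWWWWBBBWWWWWWWW"  # 21 'pixels'
print(rle_encode(row))         # [('W', 10), ('B', 3), ('W', 8)]
```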

The explosion of mobile video data in the current media moment is one of the main reasons why sophisticated compression codecs are being developed. This should not pose any particular problems for the audiovisual archivist per se—if a file is ‘born’ with high degrees of compression, the authenticity of the file should not, ideally, be compromised in subsequent migrations.

Nevertheless, the influence of the broadcast industry tells us a lot about the types of files that will be entering the archive in the next 10-20 years. On a perceptual level, we might note an endearing irony: the rise of super HD and ultra HD goes hand in hand with increased compression applied to the captured signal. While compression cannot, necessarily, be understood as a simple ‘taking away’ of data, its increased use in ubiquitous media environments underlines how the perception of high definition is engineered in very specific ways, and this engineering does not automatically correlate with capturing more, or better quality, data.

Like error correction that we have discussed elsewhere on the blog, it is often the anticipation of malfunction that is factored into the design of digital media objects. These, in turn, create the impression of smooth, continuous playback—despite the chaos operating under the surface. The greater clarity of the visual image, the more the signal has been squeezed and manipulated so that it can be transmitted with speed and accuracy. [11]

MXF

Staying with the broadcast world, we will finish this article by focussing on the MXF wrapper that was ‘specifically designed to aid interoperability and interchange between different vendor systems, especially within the media and entertainment production communities. [MXF] allows different variations of files to be created for specific production environments and can act as a wrapper for metadata & other types of associated data including complex timecode, closed captions and multiple audio tracks.’ [12]

The Presto Centre’s latest TechWatch report (December 2014) asserts ‘it is very rare to meet a workflow provider that isn’t committed to using MXF,’ making it ‘the exchange format of choice.’ [13]

We can see such adoption in action with the Digital Production Partnership’s AS-11 standard, which came into operation October 2014 to streamline digital file-based workflows in the UK broadcast industry.

While the FADGI reports highlights the instability of archival practices for video, the Presto Centre argue that practices are ‘currently in a state of evolution rather than revolution, and that changes are arriving step-by-step rather than with new technologies.’

They also highlight the key role of the broadcast industry as future archival ‘content producers,’ and the necessity of developing technical processes that can be complementary for both sectors: ‘we need to look towards a world where archiving is more closely coupled to the content production process, rather than being a post-process, and this is something that is not yet being considered.’ [14]

The world of archiving and reformatting digital video is undoubtedly complex. As the quote used at the beginning of the article states, any decision can only ever be a compromise that takes into account organizational capacities and available resources.

What is positive is the amount of research openly available, which can empower people with the basics or help them delve into the technical depths of codecs and wrappers if so desired. We hope this article points you towards many of the interesting resources available and introduces some of the key issues.

As ever, if you have a video digitization project you need to discuss, contact us—we are happy to help!

References:

[1] IASA Technical Committee (2014) Handling and Storage of Audio and Video Carriers, 6. 

[2] Carl Fleischhauer, ‘Comparing Formats for Video Digitization.’ http://blogs.loc.gov/digitalpreservation/2014/12/comparing-formats-for-video-digitization/.

[3] Federal Agencies Digital Guidelines Initiative (FADGI), Digital File Formats for Videotape Reformatting Part 5. Narrative and Summary Tables. http://www.digitizationguidelines.gov/guidelines/FADGI_VideoReFormatCompare_pt5_20141202.pdf, 4.

[4] FADGI, Digital File Formats for Videotape, 4.

[5] AV Preserve (2010) A Primer on Codecs for Moving Image and Sound Archives & 10 Recommendations for Codec Selection and Management, www.avpreserve.com/wp-content/…/04/AVPS_Codec_Primer.pdf, 1.

[6] FADGI (2014) Creating and Archiving Born Digital Video Part III. High Level Recommended Practices, http://www.digitizationguidelines.gov/guidelines/FADGI_BDV_p3_20141202.pdf, 24.

[7] Jean-Christophe Kummer, Peter Kuhnle and Sebastian Gabler (2015) ‘Broadcast Archives: Between Productivity and Preservation’, IASA Journal, vol 44, 35.

[8] Kummer et al, ‘Broadcast Archives: Between Productivity and Preservation,’ 38.

[9] David Bull (2014) Communicating Pictures, Academic Press, 435-437.

[10] AV Preserve, A Primer on Codecs for Moving Image and Sound Archives, 2.

[11] For more reflections on compression, check out this fascinating talk from software theorist Alexander Galloway. The more practically bent can download and play with VISTRA, a video compression demonstrator developed at the University of Bristol ‘which provides an interactive overview of some of the key principles of image and video compression.’

[12] FADGI, Digital File Formats for Videotape, 11.

[13] Presto Centre, AV Digitisation and Digital Preservation TechWatch Report #3, https://www.prestocentre.org/, 9.

[14] Presto Centre, AV Digitisation and Digital Preservation TechWatch Report #3, 10-11.

Posted by debra in digitisation expertise, video tape, 1 comment

DVCAM transfers, error correction coding & misaligned machines

This article is inspired by a collection of DVCAM tapes sent in by London-based cultural heritage organisation Sweet Patootee. Below we will explore several issues that arise from the transfer of DVCAM tapes, one of the many Digital Video formats that emerged in the mid-1990s. A second article will follow soon, focusing on the content of the Sweet Patootee archive: a fascinating collection of video-taped oral histories of First World War veterans from the Caribbean.

The main issue we want to explore below is the role error correction coding performs, both in the composition of the digital video signal and during preservation playback. We want to highlight this issue because it is often assumed that DVCAM, which first appeared on the market in 1996, is a fairly robust format.

The work we have done to transfer these tapes to digital files indicates that error correction coding is working in overdrive to ensure we can see and hear the recordings. The implication is that DVCAM collections, and DV-based archives more widely, should really be a preservation priority for institutions, organisations and individuals.

Before we examine this in detail, let’s learn a bit about the technical aspects of error correction coding.

Error error error

Error correction coding is a staple part of audio and audio-visual digital media. It is of great importance in today’s digital world, where higher volumes of transmitted signals require greater degrees of compression, and therefore sophisticated error correction schemes, as this article argues.

Error correction works through a process of prediction and calculation known as interpolation or concealment. It takes an estimation of the original recorded signal in order to reconstruct parts of the data that have been corrupted. Corruption can occur due either to wear and tear or to insufficiencies in the original recorded signal.

Yet as Hugh Robjohns explains in the article ‘All About Digital Audio’ from 1998:

 ‘With any error protection system, if too many erroneous bits occur in the same sample, there is a risk of the error detection system failing, and in practice, most media failures (such as dropouts on tape or dirt on a CD), will result in a large chunk of data being lost, not just the odd data bit here and there. So a technique called interleaving is used to scatter data around the medium in such a way that if a large section is lost or damaged, when the data is reordered many smaller, manageable data losses are formed, which the detection and correction systems can hopefully deal with.’

There are many different types of error correction, and ‘like CD-ROMs, DV uses Reed-Solomon (RS) error detection and correction coding. RS can correct localised errors, but seldom can reconstruct data damaged by a dropout of significant size (burst error),’ as this wonderfully detailed article about DV video formats, preserved in the web archive, explains.
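
To put rough numbers on this: a Reed-Solomon code RS(n, k) carries k data symbols and n − k parity symbols in every n-symbol block, and can correct up to (n − k) / 2 corrupted symbols per block. Taking the widely used RS(255, 223) code as an illustration (figures for explanation only – not the exact parameters of the DV format):

    parity symbols per block: 255 − 223 = 32
    correctable symbols per block: 32 / 2 = 16

A burst error hundreds of symbols long would overwhelm any single block, which is why the interleaving described above matters: because the data is scattered across the medium, the burst is divided among many blocks, each of which is then left with only a handful of errors to correct.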

The difference correction makes

Digital technology’s error correction is one of the key things that differentiates it from its analogue counterparts. As the IASA‘s Guidelines on the Production and Preservation of Digital Audio Objects (2009) explains:

‘Unlike copying analogue sound recordings, which results in inevitable loss of quality due to generational loss, different copying processes for digital recordings can have results ranging from degraded copies due to re-sampling or standards conversion, to identical “clones” which can be considered even better (due to error correction) than the original.’ (65)

To think that digital copies can, at times, exceed the quality of the original digital recording is both an astonishing and paradoxical proposition. After all, we are talking about a recording that improves at the perceptual level despite being compositionally damaged. It is important to remember, though, that error correction coding cannot work miracles; there are limits to what it can do.

Dietrich Schüller and Albrecht Häfner argue in the International Association of Sound and Audiovisual Archives’ (IASA) Handling and Storage of Audio and Video Carriers (2014) that ‘a perfect, almost error free recording leaves more correction capacity to compensate for handling and ageing effects and, therefore, enhances the life expectancy.’ If, however, a recording is made ‘with a high error rate, then there is little capacity left to compensate for further errors’ (28-29).

The bizarre thing about error correction coding, then, is the appearance of clarity it can create. If there are no other recordings to compare with the transferred file, it is really hard to know what the recorded signal is supposed to look and sound like were its errors not being corrected.

When we watch the successfully migrated, error-corrected file post-transfer, it matters little whether the original was damaged. If a clear signal is transmitted with high levels of error correction, the errors will not be transferred, only the clear image and sound.

Contrast this with a damaged analogue tape, where the damage would be clearly discernible on playback. The plus point of analogue tape is that it degrades gracefully: it is possible to play back an analogue tape recording with real physical deterioration and still get surprisingly good results.

Digital challenges

The big challenge when working with any digital recordings on magnetic tape is knowing when a tape is in poor condition prior to playback. Often a tape will look fine and, because of error correction, will sound fine too, until it stops working entirely.

How then did we know that the Sweet Patootee tapes were experiencing difficulties?

Professional DV machines such as our DVC PRO have a warning function that flashes when the error correction coding is working at heightened levels. On our first attempt to play back the tapes we noticed that regular sections on most of the tapes could not be fixed by error correction.

The ingest software we use is designed to automatically retry sections of the tape with higher levels of data corruption until a signal can be retrieved. Imagine a process where a tape automatically goes through a playing-rewinding loop until the signal can be read. We were able to play back the tapes eventually, but the high level of error correction was concerning.

Diagram showing how the recorded signal is allocated in the DV format

As this diagram makes clear, around 25% of the recorded signal in DVCAM is composed of subcode data, error detection and error correction.

DVCAM & Mis-alignment

It is not just the over-active error correction on DVCAMs that should set alarm bells ringing.

Alan Griffiths from Bristol Broadcast Engineering, a trained SONY engineer with over 40 years’ experience working in the television industry, told us that early DVCAM machines pose particular preservation challenges. The main problem is that the ‘mechanisms are completely different’ on earlier DVCAM machines, which means there is ‘no guarantee’ their recordings will play back effectively on later models.

Recordings made on early DVCAM machines exhibit back tension problems and tracking issues. This increases the likelihood of DV dropout on playback, because a loss of information was recorded onto the original tape itself. The IASA confirm that ‘misalignment of recording equipment leads to recording imperfections, which can take manifold form. While many of them are not or hardly correctable, some of them can objectively be detected and compensated for.’

One possible solution to this problem, as with DAT tapes, is to ‘misalign’ the replay digital video tape recorder to match the misaligned recordings. However ‘adjustment of magnetic digital replay equipment to match misaligned recordings requires high levels of engineering expertise and equipment’ (2009; 72), and must therefore not be ‘tried at home,’ so to speak.

Our experience with the Sweet Patootee tapes indicates that DVCAM tapes are a more fragile format than is commonly thought, particularly if your DVCAM collection was recorded on early machines. If you have a large collection of DVCAM tapes we strongly recommend that you begin to assess the contents and make plans to transfer them to digital files. As always, do get in touch if you need any advice to develop your plans for migration and preservation.

 

Posted by debra in digitisation expertise, video tape, 0 comments

Transferring Digital Audio Tapes (DATs) to digital audio files

At Greatbear, we carefully restore and transfer to digital file all types of content recorded to Digital Audio Tape (DAT), and can support all sample rate and bit depth variations.

This post focuses on some of the problems that can arise with the transfer of DATs.

An immature recording method (digital) on a mature recording format (magnetic tape), the digital audio recording revolution was never going to get it right first time (although DATs were not, of course, the first digital recordings made on tape).

Indeed, at a meeting of audio archivists held in 1995, there was a consensus even then that DAT was not, and would never be, a reliable archival medium. One participant stated: ‘we have tapes from 1949 that sound wonderful,’ and ‘we have tapes from 1989 that are shot to hell.’ And that was nearly twenty years ago! What chances do the tapes have now?

A little DAT history

Before we explore that, let’s have a little DAT history.

SONY introduced Digital Audio Tapes (DATs) in 1987. At roughly half the size of an analogue cassette tape, DAT has the ability to record at higher, equal or lower sampling rates than a CD (48, 44.1 or 32 kHz sampling rate respectively) at 16 bit quantization.

Although popular in Japan, DATs were never widely adopted by the majority of the consumer market because they were more expensive than their analogue counterparts. They were, however, embraced in professional recording contexts, and in particular for recording live sound.

It was recording industry paranoia, particularly in the US, that really sealed the fate of the format. With its threatening promise of perfect replication, DAT was subjected to an unsuccessful lobbying campaign by the Recording Industry Association of America (RIAA), which saw DAT as the ultimate attack on copyright law and pressed to introduce the Digital Audio Recorder Copycode Act of 1987.

This law recommended that each DAT machine had a ‘copycode’ chip installed that could detect whether prerecorded copyrighted music was being replicated. The method employed a notch filter that would subtly distort the quality of the copied recording, thus sabotaging acts of piracy tacitly enabled by the DAT medium. The law was however not passed, and compromises were made, although the US Audio Home Recording Act of 1992 imposed taxes on DAT machines and blank media.

How did they do ‘dat?

Like video tape recorders, DAT machines use a rotating head and a helical scan method to record data. The helical scan can, however, pose real problems for the preservation transfer of DAT tapes, because it makes it difficult to splice the tape back together if it becomes sticky and snaps during the tape wind. With analogue audio tape, which records information longitudinally, it is far more feasible to splice the tape together and continue the transfer without risking irrevocable information loss.

Another problem posed by the helical scan method is that such tapes are more vulnerable to tape pack and backing deformation, as the CLIR guide explains:

‘Tracks are recorded diagonally on a helical scan tape at small scan angles. When the dimensions of the backing change disproportionately, the track angle will change for a helical scan recording. The scan angle for the record/playback head is fixed. If the angle that the recorded tracks make to the edge of the tape do not correspond with the scan angle of the head, mistracking and information loss can occur.’

When error correction can’t correct anymore

Most people will be familiar with the sound of digital audio dropouts even if they don’t know the science behind them. You will most probably know them as those horrible clicking noises produced when the error correction technology on CDs stops working. The clicks indicate that the ‘threshold of intelligibility’ for digital data has been breached and, as theorist Jonathan Sterne reminds us, ‘once their decay becomes palpable, the file is rendered entirely unreadable.’

Our SONY PCM 7030 professional DAT machine, pictured opposite, has a ‘playback condition’ light that flashes if an error is present. On sections of the tape where quality is really bad, the ‘mute’ light can flash to indicate that the error correction technology can’t fix the problem. In such cases dropouts are very audible. Most DAT machines did not have such a facility, however, and you only knew there was a problem when you heard the glitchy-clickety-crackle during playback – when, of course, it was too late to do anything about it.

The bad news for people with large, yet-to-be-migrated DAT archives is that the format is ‘particularly susceptible to dropout. Digital audio dropout is caused by a non-uniform magnetic surface, or a malfunctioning tape deck. However, because the magnetically recorded information is in binary code, it results in a momentary loss of data and can produce a loud transient click or worse, muted audio, if the error correction scheme in the playback equipment cannot correct the error,’ as the wonderfully informative A/V Artifact Atlas explains.

Given the high-density nature of digital recordings on narrow magnetic tape, even the smallest speck of dust can cause digital audio dropouts. Such errors can be very difficult to eliminate. Cleaning the playback heads and re-transferring is an option, but if the dropout was recorded at source, or the surface of the tape is damaged, then the only way to treat irregularities is to apply audio restoration technologies – which may present a problem if you are concerned with maintaining the authenticity of the original recording.

Listen to this example of what a faulty DAT sounds like

Playback problems and mouldy DATs

Mould growth on the surface of DAT tape

A big problem with DAT transfers is actually being able to play back the tapes at all – what is known in the business as ‘DAT compatibility.’ In an ideal world, to get the most accurate transfer you would play back a tape on the same machine it was originally recorded on. The chances of being able to do this are, of course, pretty slim. While you can play your average audio cassette on pretty much any tape machine, the same cannot be said for DAT tapes. Often recordings were made on misaligned machines. The only solution for playback in such cases, Richard Hess suggests, is to mis-adjust a working machine to match the alignment of the recording on the tape.

As with any archival collection, if it is not stored in appropriate conditions then mould growth can develop. As mentioned above, DAT tapes are roughly half the size of the common audio cassette and the tape is thin and narrow. This makes them difficult to clean because they are mechanically fragile. Adapting a machine specifically for the purpose of cleaning, as we have done with our Studer machine, is the ideal solution. There is, however, not a massive amount of research and information about restoring mouldy DATs available online, even though we are seeing more and more DAT tapes exhibiting this problem.

As with much of the work we do, the recommendation is to migrate your collections to digital files as soon as possible. But often it is a matter of priorities and budgets. From a technical point of view, DATs are a particularly vulnerable format. Machine obsolescence means that, compared to their analogue counterparts, professional DAT machines will be increasingly hard to service in the long term. As detailed above, glitchy dropouts are almost inevitable given the sensitivity and all-or-nothing quality of digital data recorded on magnetic tape.

It seems fair to say that despite being meant to supersede analogue formats, DATs are far more likely to drop out of recorded sound history in a clinical and abrupt manner.

They therefore should be a high priority when decisions are made about which formats in your collection should be migrated to digital files immediately, over and above those that can wait just a little bit longer.

Posted by debra in audio tape, digitisation expertise, 9 comments

Open Source Solutions for Digital Preservation

In a technological world that is rapidly changing, how can digital information remain accessible?

One answer to this question lies in the use of open source technologies. As a digital preservation strategy it makes little sense to save data for the long term using proprietary codecs tied to MacOS or Windows. Proprietary software essentially operates like a closed system and risks compromising access to data in years to come.

Linux Operating System

It is vital, therefore, that the digitisation work we do at Great Bear is done within the wider context of digital preservation. This means making informed decisions about the hardware and software we use to migrate your tape-based media into digital formats. We use a mixture of proprietary and open source software, simply because it makes our life a bit easier. Customers also ask us to deliver their files in proprietary formats. For example, Apple ProRes is a really popular codec that doesn’t take up a lot of data space, so our customers often request this, and of course we are happy to provide it.

Using open systems definitely has benefits. The flexibility of Linux, for example, enables us to customise our digitisation system according to what we need to do. As with the rest of our work, we are keen to find ways to keep using old technologies if they work well, rather than simply throwing things away when shiny new devices come on the market. There is a misconception that to ingest vast amounts of audio data you need the latest hardware. All you need, in fact, is a big hard drive; flexible yet reliable software; and an operating system that doesn’t crash, so it can be left to ingest for 8 hours or more. Simple! One example of the open source software we use is the sound processing programme SoX. This saves us a lot of time because we can write scripts for the programme that batch process audio data according to project specifications (see the sketch below).
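
As a minimal sketch of the kind of batch script we mean (the directories and the processing chain here are hypothetical – a real job would follow the project’s own specification):

    #!/bin/bash
    # Batch-process every WAV in masters/ into 16 bit / 44.1 kHz access copies.
    # SoX performs the sample rate conversion, with dither added to smooth
    # the reduction in bit depth.
    mkdir -p access
    for f in masters/*.wav; do
        sox "$f" -b 16 "access/$(basename "$f")" rate 44100 dither
    done

Because a script like this runs unattended, it can be left working through an eight-hour ingest without anyone needing to sit at the machine.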

Openness in the digital preservation world

Within the wider digital preservation world open source technologies are also widely used. With digital preservation tools developed by projects such as SCAPE and the Open Planets Foundation, there are plenty of software resources available for individuals and organisations who need to manage their digital assets. It would be naïve, however, to assume that the practice of openness here, and in other realms of the information economy, is born from the same techno-utopian impulse that propelled the open software movement from the 1970s onwards. The SCAPE website makes it clear that the development of open source information preservation tools is ‘the best approach given the substantial public investment made at the European and national levels, and because it is the most effective way to encourage commercial growth.’

What would make projects like SCAPE and Open Planets even better is if they thought about ways to engage non-specialist users who may be curious about digital preservation tools but have little experience of navigating complex software. The tools may well be open, but the knowledge of how to use them is not.

Openness, as a means of widening access to technical skills and knowledge, is the impulse behind the AV Artifact Atlas (AVAA), an initiative developed in conjunction with the community media archive project Bay Area Video Coalition. In a recent interview on the Library of Congress’ Digital Preservation Blog, Hannah Frost, Digital Library Services Manager at Stanford Libraries and Manager of the Stanford Media Preservation Lab, explains the idea behind the AVAA.

‘The problem is most archivists, curators and conservators involved in media reformatting are ill-equipped to detect artifacts, or further still to understand their cause and ensure a high quality job. They typically don’t have deep training or practical experience working with legacy media. After all, why should we? This knowledge is by and large the expertise of video and audio engineers and is increasingly rare as the analogue generation ages, retires and passes on. Over the years, engineers sometimes have used different words or imprecise language to describe the same thing, making the technical terminology even more intimidating or inaccessible to the uninitiated. We need a way to capture and codify this information into something broadly useful. Preserving archival audiovisual media is a major challenge facing libraries, archives and museums today and it will challenge us for some time. We need all the legs up we can get.’

The promise of openness can be a fraught terrain. In some respects we are caught between a hyper-networked reality, where ideas, information and tools are shared openly at a lightning pace. There is the expectation that we can have whatever we want, when we want it, which is usually now. On the other side of openness are questions of ownership and regulation – who controls information, and to what ends?

Perhaps the emphasis placed on the value of information within this context will ultimately benefit digital archives, because there will be significant investment, as there already has been, in the development of open resources that will help to take care of digital information in the long term.

Posted by debra in audio tape, digitisation expertise, video tape, 0 comments

Early digital tape recordings on PCM/ U-matic and Betamax video tape

We are now used to living in a born-digital environment, but the transition from analogue to digital technologies did not happen overnight. In the late 1970s, early digital audio recordings were made possible by a hybrid analogue/digital system. It was composed of the humble transport and recording mechanisms of the video tape machine, and a not-so-humble PCM (pulse code modulation) digital processor. Together they created the first two-channel stereo digital recording system.

Inside a Betamax Video Recorder

The first professional-use digital processing machine, made by SONY, was the PCM-1600. It was introduced in 1978 and used a U-matic tape machine. Later models, the PCM-1610/1630, acted as the first standard for mastering audio CDs in the 1980s. SONY employee Toshitada Doi, whose impressive CV includes the development of the PCM adaptor, the Compact Disc and the CIRC error correction system, visited recording studios around the world in an effort to facilitate the professional adoption of PCM digital technologies. He was not, however, welcomed with open arms, as the SONY corp. website explains:

'Studio engineers were opposed to digital technology. They criticized digital technology on the grounds that it was more expensive than analogue technology and that it did not sound as soft or musical. Some people in the recording industry actually formed a group called MAD (Musicians Against Digital), and they declared their position to the Audio Engineering Society (AES).'

Several consumer/ semi-professional models were marketed by SONY in the 70s and 80s, starting with the PCM-1 (1977). In a retro-review of the PCM-F10 (1981), Dr Frederick J. Bashour explains that

'older model VCRs often worked better than newer ones since the digital signal, as seen by the VCR, was a monochrome pattern of bars and dots; the presence of modern colour tweaking and image compensation circuits often reduced the recording system's reliability and, if possible, were turned off.'

Why did the evolution of an emerging digital technology stand on the shoulders of what had, by 1981, become a relatively mature analogue technology? It all comes down to the issue of bandwidth. A high quality PCM audio recording required 1-1.5 MHz of bandwidth, far greater than that of a conventional analogue audio signal (15-20 kHz). While this bandwidth was beyond the scope of analogue audio recording technology of the time, video tape recorders did have the capacity to record signals with higher bandwidths.

If you have ever wondered where the 16 bit/44.1 kHz sampling standard for the CD came from, it is because in the early 1980s, when the CD standard was agreed, there was no other practical way of storing digital sound than by a PCM converter and video recorder combination. As the Wikipedia entry for the PCM adaptor explains, ‘the sampling frequencies of 44.1 and 44.056 kHz were thus the result of a need for compatibility with the 25-frame (CCIR 625/50 countries) and 30-frame black and white (EIAN 525/60 countries) video formats used for audio storage at the time.’ The sampling rate was adopted as the standard for CDs and, unlike many other things in our rapidly changing technological world, it hasn’t changed since.
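
The arithmetic behind that compatibility is worth spelling out. As commonly recounted, the PCM adaptor stored three samples per channel on each active video line, so both television standards arrive at exactly the same figure:

    NTSC: 245 active lines per field × 60 fields per second × 3 samples = 44,100 samples per second
    PAL: 294 active lines per field × 50 fields per second × 3 samples = 44,100 samples per second

Hence a 44.1 kHz recording could be stored by machines built for either video standard.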

The fusion of digital and analogue technologies did not last long, and the introduction of DAT tapes in 1987 rendered the PCM digital converter/video tape system largely obsolete. DAT recorders basically did the same job as PCM/video but came in one, significantly smaller, machine. DAT machines had the added advantage of being able to accept multiple sampling rates (the standard 44.1 kHz, as well as 48 kHz and 32 kHz, all at 16 bits per sample, and a special LP recording mode using 12 bits per sample at 32 kHz for extended recording time).

Problems with migrating early digital tape recordings

There will always be the risk with any kind of magnetic tape recording that there won’t be enough working tape machines to play back the material recorded on them in the future. As spare parts become harder to source, machines with worn-out transport mechanisms will simply become inoperable. We are not quite at this stage yet, and at Greatbear we have plenty of working U-matic, Betamax and VHS machines, so don’t worry too much! Machine obsolescence is, however, a real threat facing tape-based archives.

Such a problem comes into sharp relief when we consider the case of digital audio recordings made on analogue video tape machines. Audio recording ‘works’ the tape transport in a far more vigorous fashion than average domestic video use. A tape may be rewound and fast-forwarded more often, and in a professional environment may be in constant use, leading to greater wear and tear.

Those who chose to adopt digital early and made recordings on tape will have marvelled at the lovely clean recordings and the wonders of error correction technology. As a legacy format, however, tape-based digital recordings are arguably more at risk than their analogue counterparts. They are doubly compromised by the fragility of tape and by the particular problems that befall digital technologies when things go wrong.

Example of edge damage on a video tape

‘Edge damage’ is very common in video tape and can happen when the tape transport becomes worn. This can alter the alignment of the transport mechanism, leading it to move up and down and crush the tape. As you can see in this photograph, the edge of this tape has become damaged.

Because it is a digital recording, this has led to substantial problems with the transfer, namely that large sections of the recording simply ‘drop out.’ In instances such as these, where the tape itself has been damaged, analogue recordings on tape are infinitely more recoverable than digital ones. Dr John W.C. Van Bogart explains that

'even in instances of severe tape degradation, where sound or video quality is severely compromised by tape squealing or a high rate of dropouts, some portion of the original recording will still be perceptible. A digitally recorded tape will show little, if any, deterioration in quality up to the time of catastrophic failure when large sections of recorded information will be completely missing. None of the original material will be detectable in these missing sections.'

This risk of catastrophic, as opposed to gradual, loss of information is what makes tape-based digital recordings particularly fragile and at risk. What is particularly worrying about digital tape recordings is that they may not show any external signs of damage until it is too late. We therefore encourage individuals, recording studios and memory institutions to assess the condition of their digital tape collections and take prompt action if the recorded information is valuable.

 The story of PCM digital processors and analogue tapes gives us a fascinating window into a time when we were not quite analogue, but not quite digital either, demonstrating how technologies co-evolve using the capacities of what is available in order to create something new.

For our PCM audio on video tape transfer services please follow this link: greatbear - PCM audio on video tape

Posted by debra in audio tape, digitisation expertise, 4 comments

Measuring signals – challenges for the digitisation of sound and video

In a 2012 report entitled ‘Preserving Sound and Moving Pictures’ for the Digital Preservation Coalition’s Technology Watch Report series, Richard Wright outlines the unique challenges involved in digitising audio and audiovisual material. ‘Preserving the quality of the digitized signal’ across a range of migration processes that can negotiate ‘cycles of lossy encoding, decoding and reformatting is one major digital preservation challenge for audiovisual files’ (1).

Wright highlights a key issue: understanding how data changes as it is played back, or moved from location to location, is important for thinking about digitisation as a long term project. When data is encoded, decoded or reformatted it alters shape, potentially leading to a compromise in quality. This is a technical way of describing how elements of a data object are added to, taken away or otherwise transformed when they are played back across a range of systems and software that are different from the original data object.

Time base corrector

To think about this in terms which will be familiar to people today, imagine converting an uncompressed WAV into an MP3 file. You then burn your MP3s onto a CD as WAV files so they will play back on your friend’s CD player. The WAV file you started off with is not the same as the WAV file you end up with: although the decoded file takes up just as much storage space as the original, the detail discarded during MP3 encoding has been lost for good. The smaller MP3 may be a bonus for portability, but the loss of quality isn’t. And this is what happens when files are encoded, decoded and reformatted.
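
You can reproduce this loss for yourself with any encoder to hand. A quick sketch using ffmpeg (assuming a build with MP3 support; the filenames are hypothetical):

    # Encode an uncompressed master to lossy MP3...
    ffmpeg -i master.wav -b:a 192k lossy.mp3
    # ...then decode it back to WAV
    ffmpeg -i lossy.mp3 restored.wav

restored.wav is a full-size uncompressed file again, but the detail the MP3 encoder threw away cannot be brought back by converting in the other direction.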

Subjecting data to multiple layers of encoding and decoding does not only apply to digital data. Take Betacam video, for instance, a component analogue video format introduced by SONY in 1982. If your video was played back using the composite output, the circuitry within the Betacam video machine would have needed to encode it. The difference may have looked subtle, and you may not even have noticed any change, but the structure of the signal would be altered in a ‘lossy’ way and cannot be recovered to its original form. The encoding of a component signal, which is split into two or more channels, to a composite signal, which essentially squashes the channels together, is comparable to the lossy compression applied to digital formats such as MP3 audio, MPEG-2 video, etc.

U-matic time base corrector

A central part of the work we do at Greatbear is to understand the changes that may have occurred to the signal over time, and to try to minimise further losses in the digitisation process. We use a range of specialist equipment so we can carefully measure the quality of the analogue signal, including external time base correctors and waveform monitors. We also make educated decisions about which machine to use to play back tapes, in line with what we expect the original recording was made on.

If we take for granted that any kind of recording, whether analogue or digital, will have been altered in its lifetime in some way – either through changes to the signal or file structure, or because of poor storage – an important question arises from an archival point of view. What do we do with the quality of the data customers send us to digitise? If the signal of a video tape is fuzzy, should we try to stabilise the image? If there is hiss and other forms of noise on a tape, should we reduce it? Should we apply the same conservation values to audio and film as we do to historic buildings, such as ruins, or great works of art? Should we practise minimal intervention, using appropriate materials and methods that aim to be reversible, while ensuring that full documentation of all work undertaken is made, creating a trail of endless metadata as we go along?

Do we need to preserve the ways magnetic tape, optical media and digital files degrade and deteriorate over time, or are the rules different for media objects that store information which is not necessarily exclusive to them (the same recording can be played back on a vinyl record, a cassette tape, a CD player, an 8-track cartridge or an MP3 file, for example)? Or should we ensure that we can hear and see clearly, and risk altering the original recording so we can watch a digitised VHS on a flat screen HD television, in line with our current expectations of media quality?

Time base correctors

Richard Wright suggests it is the data, rather than the operating facility, which is the important thing about the digital preservation of audio and audiovisual media.

‘These patterns (for film) and signals (for video and audio) are more like data than like artefacts. The preservation requirement is not to keep the original recording media, but to keep the data, the information, recovered from that media’ (3).

Yet it is not always easy to understand which parts of the data should be discarded and which parts should be kept. Audiovisual and audio data are a product of both form and content, and it is worth taking care over the practices we use to preserve our collections, in case we overlook the significance of this point and lose something valuable – culturally, historically and technologically.

Posted by debra in audio tape, digitisation expertise, video tape, 0 comments

Copying U-matic tape: digitise via dub connector or composite video?

umatic dub to y/c converter detail

Digitising legacy and obsolete video formats is in essence simple, but the technical details make the process more complex. Experience and knowledge are therefore needed to make the most appropriate choices for the medium.

The U-matic video format usually had two types of video output, composite and a y/c type connector that Sony named ‘Dub’. Originally designed as a higher quality method to make analogue ‘dubs’, or for connections in an edit suite, the Dub connector offers a higher performance signal path for the video signal.

It would make sense to use the higher quality Dub output when digitising U-matic tapes, but here lies the problem. Firstly, it uses the larger 7-pin y/c type connector, for which cables and plugs can be quite hard to find.

Secondly, and most significantly, the chrominance subcarrier frequency is not the standard PAL 4.43 MHz but is down-converted by U-matic recorders to 0.686 MHz for low band recordings and 0.984 MHz for high band recordings.

What this means in practice is that you’ll only get a monochrome image using the U-matic Dub connector unless you can find a way to convert the chroma subcarrier frequency back to 4.43 MHz.

There are several solutions:

  1. Convert the Dub signal’s chroma frequency using one of the few older time base correctors / frame synchronisers from the U-matic era.
    These are now rare and often have other faults that would degrade the signal.
  2. Take the luma and chroma signals at the correct frequency directly from certain test points on the circuit boards inside the machines.
    This can work well but is a slightly ‘messy’ solution and makes it hard to swap machines around, which is a necessity with older hardware.
  3. Convert the Dub signal using a dedicated external Dub – y/c converter circuit.
    This is our preferred solution and works well technically. It is flexible enough to swap around to different machines easily. It is also a relatively simple circuit that is easy to repair and doesn’t subject the video signal to unnecessary extra processing.

Below are two stills taken from an Apple ProRes recording of a Low Band PAL U-matic tape.
The first image is via the Dub connector, converted to PAL Y/C.
The second image is via the composite video output.

Kieran Prendiville U-matic screenshot: Dub connector

 

Kieran Prendiville U-matic screenshot: composite video connector

It’s clear from the images that there is more fine detail in the picture from the U-matic Dub version. The pattern and texture in the jacket, and the texture and tone in the face, are more detailed. In contrast, the version digitised through the composite video connector has less noise, but due to the extra encoding and decoding there is less detail and more ‘blurring’.

While less noise may be preferable in some instances, having the option to choose between the two is always better. It’s this kind of attention to detail, and investment in equipment and knowledge, that we are proud of and that makes us a preferred supplier of digitising services for U-matic video tape.

Posted by greatbear in digitisation expertise, video tape, 12 comments

Delivery formats – to compress or not compress

Screenshot of software encoding a file to MP3 used at the Great Bear

After we have migrated your analogue or digital tape to a digital file, we offer a range of delivery formats.

For video, using the International Association of Sound & Audiovisual Archives’ Guidelines for the Preservation of Video Recordings as our guide, we deliver FFV1 lossless files or 10-bit uncompressed video files in .mkv or QuickTime-compatible .mov containers. We add viewing copies as H264-encoded .mp4 files or DVD. We’ll also produce any other digital video files according to your needs, such as AVI in any codec, on any MacOS, Windows or GNU/Linux filesystem (HFS+, NTFS or EXT3).
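
As an illustration of how such files can be produced, here is a sketch using the open source tool ffmpeg (the filenames are hypothetical, and our in-house workflow involves more stages and quality checks than this):

    # Lossless FFV1 (version 3) preservation master in a Matroska container
    ffmpeg -i capture.mov -c:v ffv1 -level 3 -c:a copy master.mkv
    # Smaller H264-encoded viewing copy
    ffmpeg -i master.mkv -c:v libx264 -crf 18 -c:a aac viewing.mp4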

For audio we offer Broadcast WAV (B-WAV) files on hard drive or optical media (CD) at 16 bit/44.1 kHz (commonly used for CDs) or 24 bit/96 kHz (the minimum recommended archival standard), and anything up to 24 bit/192 kHz. We can also deliver access copies on CD or as MP3s (that you could upload to the internet, or listen to on an iPod, for example).

Why are there so many digital file types and what distinguishes them from each other?

The main difference that is important to grasp is between an uncompressed digital file and a compressed one.

On the JISC Digital Media website, they describe uncompressed audio files as follows:

‘Uncompressed audio files are the most accurate digital representation of a soundwave, but can also be the most resource-intensive method of recording and storing digital audio, both in terms of storage and management. Their accuracy makes them suitable for archiving and delivering audio at high resolution, and working with audio at a professional level, and they are the “master” audio format of choice.’

Why uncompressed?

As a Greatbear client you may wonder why you need a large, uncompressed digital file if you only want to listen to your old analogue and digital tapes again. The simple answer is: we live in an age where information is dynamic rather than static. An uncompressed digital recording, captured at a high bit depth and sampling rate, is the most stable form in which to store your data. Technology is always changing and evolving, and not all types of digital files that are common today are safe from obsolescence.

It is important to consider questions of accessibility not only for the present moment, but also for the future. There may come a time when your digitised audio or video file needs to be migrated again, so that it can be played back on whatever device has become ‘the latest thing’ in a market driven by perpetual innovation. It is essential that you have access to the best quality digital file possible, should you need to migrate your data ten, fifteen or twenty years from now.

Compression and compromise?

Uncompressed digital files are sound and vision captured in their purest, ‘most accurate’ form. Parts of the original recording are not lost when the file is converted or saved. When a digital file is saved to a compressed, lossy format, some of its information is lost. Lossy compression eliminates ‘unnecessary’ bits of information, tailoring the file so that it is smaller. You can’t get the original file back after it has been compressed, so you can’t use this sort of compression for anything that needs to be reproduced exactly. However, it is possible to compress files to a lossless format, which does enable you to recreate the original file exactly.

In our day-to-day lives, however, we encounter far more compressed digital information than uncompressed.

There would be no HD TV, no satellite TV channels and no iPods/MP3 players without compressed digital files. The main point of compression is to make these services affordable. It would be incredibly expensive, and take up an enormous amount of data space, if the digital files streamed to televisions were uncompressed.

While compression is great for portability, it can result in a compromise on quality. As Simon Reynolds writes about MP3 files in his book Retromania: Pop Culture’s Addiction to its Own Past:

‘Every so often I’ll get the proper CD version of an album I’ve fallen in love with as a download, and I’ll get a rude shock when confronted by the sense of dimension and spatiality in the music’s layers, the sculpted force of the drums, the sheer vividness of the sound. The difference between CD and MP3 is similar to that between “not from concentrate” orange juice and juice that’s been reconstituted from concentrate. (In this analogy vinyl would be ‘freshly squeezed’, perhaps.) Converting music to MP3 is a bit like the concentration process, and it’s done for much the same reason: it’s much cheaper to transport concentrate because without the water it takes up a lot less volume and it weighs a lot less. But we can all taste the difference.’

As a society we are slowly coming to terms with the double challenge of hyper-consumption and conservation thrown up by the mainstreaming of digital technology. Part of that challenge is to understand what happens to the digital data we use when we click ‘save as,’ and to know what decisions need to be made about data we want to keep because it is important to us as individuals, or to wider society.

At Greatbear we can deliver digital files in compressed and uncompressed formats, and are happy to offer a free consultation should you need it to decide what to do with your tape-based digital and analogue media.

Posted by debra in audio tape, digitisation expertise, video tape, 0 comments

Convert, join and re-encode AVCHD .MTS files in Ubuntu Linux

convert, encode and join avchd files in linux

One of our audio and video archive customers has a large collection of AVCHD video files that are stored in 1.9GB ‘chunks’ as xxxxx.MTS files. All these files are 60 minutes or longer in duration and must be joined, deinterlaced, re-encoded to a suitable size and bitrate, then uploaded for online access.

This is quite a task in computer time and file handling. These small domestic cameras produce good HD movies at low cost, but the compression needed to achieve this is very high and does not produce a file that is easily edited. The .MTS files are MPEG transport stream containers for H264-encoded video.

There are some proprietary solutions for MacOS X and Windows that will repackage the .MTS files into .MOV QuickTime containers that can be accessed by MacOS X, or re-encode them to a less compressed format for editing with Final Cut Pro or Premiere. We didn’t need this though, just a reliable and quick open source workflow.

  1. The first and most important issue is to rejoin the camera-split files.
    These cameras use FAT32 file systems, which cannot handle individual files larger than 2GB, so they split the .MTS video file into chunks. As each chunk in a continuous sequence references the others, they must be joined in the correct order. This is easily achieved with the cat command (see the sketch after this list).
  2. The rejoined .MTS files can now be re-encoded to a more manageable size using open source software such as HandBrake. We also needed to deinterlace our footage, as it was shot interlaced but would be accessed on progressive displays. This will increase the encoding time, but without it any movement will look odd, with visible artifacts.
  3. Finding the ‘sweet spot’ for encoding can be time-consuming, but in this case it was important, as projected text needed to be legible while file sizes were kept manageable for reasonable upload times!
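
Putting steps 1 and 2 together, the whole job reduces to a couple of commands. A minimal sketch with hypothetical filenames, using HandBrake’s command line interface:

    # Rejoin the camera-split chunks, in the correct order
    cat 00000.MTS 00001.MTS 00002.MTS > joined.mts
    # Re-encode and deinterlace for progressive displays
    HandBrakeCLI -i joined.mts -o joined.mp4 -e x264 -q 20 --deinterlace

The -q value is the quality setting referred to in step 3: lower numbers mean higher quality and larger files, so a little trial and error is needed to find that sweet spot.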

 

Posted by greatbear in digitisation expertise, video tape, 0 comments

Audio data recovery from external USB drive using ddrescue

High resolution audio and video digital tape conversions can use large amounts of computer storage: 8 bit uncompressed Standard Definition (SD) PAL video runs at around 70 GB per hour, and 24 bit 96 kHz audio at around 2 GB per hour.
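
Those figures follow more or less directly from the raw signal parameters (rounded, and assuming 4:2:2 sampling at 2 bytes per pixel for the video):

    SD PAL video: 720 × 576 pixels × 2 bytes × 25 frames/s ≈ 20.7 MB/s ≈ 70 GB per hour
    24 bit / 96 kHz stereo audio: 96,000 samples/s × 3 bytes × 2 channels ≈ 0.58 MB/s ≈ 2 GB per hour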

As a result, many of our analogue to digital tape transfers require the use of external storage, usually USB 2.0 portable hard drives, to supply the copied digital transfers back to the customer. Some drives supplied by customers have not been of great quality, and not designed to be sent about in the post. One such drive we received recently, a Sony Vaio branded 2.5″ USB drive, wouldn’t copy certain directories of important files with the Mac OS Finder or Windows Explorer. While most of the drive copied fine, one particular folder always resulted in a crashed computer!

Thanks to GNU/Linux we have a bit more power and information at our disposal about hard drives and IDE or USB interfaces. It’s always best practice to copy as much information from the drive as possible, or mirror it, before attempting any other type of data recovery or file system repair. The standard dd tool can image a whole drive or partition, but by default it gives up when it hits the kind of unreadable sectors that were crashing the file managers. GNU ddrescue is designed for exactly this situation: it copies everything it can read, keeps a map of the bad areas, and then goes back to retry them.
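
A minimal sketch of that imaging process (the device name and filenames are hypothetical – always double-check the device node before imaging anything):

    # First pass: copy everything readable, recording bad areas in a map file
    ddrescue /dev/sdb1 rescued.img rescue.map
    # Second pass: retry the bad areas up to three times
    ddrescue -r3 /dev/sdb1 rescued.img rescue.map
    # Mount the rescued image read-only and copy the audio files out
    sudo mount -o loop,ro rescued.img /mnt/rescue

With the data safely imaged, the fragile drive itself never has to be stressed again.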

Posted by greatbear in digitisation expertise, 0 comments