
Event Review: Transcription, Captioning, and Subtitling: An Introduction for Editors

Written by Amy Haagsma; copy edited by Karen Barry

Recap of EAC-BC’s branch meeting on April 15, 2015.

Last month, Kelly Maxwell spoke to EAC members and guests about the fascinating world of transcription, captioning, and subtitling. Kelly is co-owner and co-founder of Vancouver-based Line 21 Media Services, which provides services to the television and film industry. Line 21 works primarily with post-production coordinators, who shepherd television shows and movies through editing, colour correction, and distribution.

Kelly started her career in 1991, working for one of five licensees of new captioning software that had been developed through a Canadian Heritage grant. She came to the company via her friend Carolyn, who was working there already and trained her. When the owner retired in 1994, he offered Kelly and Carolyn the opportunity to buy the business. Instead, they decided to venture out on their own, and Line 21 was born.

Although captioning was still very much in its infancy, demand increased steadily, due in part to the Americans with Disabilities Act and the Television Decoder Circuitry Act, both passed in 1990. The latter legislation required that all televisions distributed in the United States have closed captioning decoders, an important step in making captioning more widespread. The Canadian Radio-television and Telecommunications Commission (CRTC) followed suit, requiring broadcasters to make captioning available for their programs. More recent regulations have also introduced standards for captioning quality.

With captioning and subtitling, Kelly generally receives a script from her clients. Her first step is to clean up the material: getting the words and punctuation right ensures that viewers instantly understand what they’re seeing on screen. Next, she breaks the script down into readable chunks and matches them to time codes in the media. This involves some “funky math”: in addition to hours, minutes, and seconds, time codes include frames, and the number of frames per second varies depending on the project. Captions are then placed into the media using timed TIFF files.
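For readers curious about what that “funky math” looks like in practice, here is a rough sketch in Python. It is purely illustrative, not Kelly’s workflow or the software Line 21 uses: it converts an HH:MM:SS:FF time code to a frame count and back, assuming a simple non-drop-frame rate.

# Illustrative sketch only: the arithmetic behind HH:MM:SS:FF time codes,
# assuming a simple non-drop-frame rate. Function names are hypothetical.

def timecode_to_frames(timecode: str, fps: int) -> int:
    """Convert an HH:MM:SS:FF time code to a total frame count."""
    hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frames_to_timecode(total_frames: int, fps: int) -> str:
    """Convert a total frame count back to HH:MM:SS:FF."""
    frames = total_frames % fps
    total_seconds = total_frames // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

if __name__ == "__main__":
    # The same pair of time codes spans a different number of frames at
    # different frame rates, which is why the rate matters for every project.
    for fps in (24, 25, 30):
        start = timecode_to_frames("00:01:30:00", fps)
        end = timecode_to_frames("00:01:33:12", fps)
        print(f"{fps} fps: caption spans {end - start} frames "
              f"({(end - start) / fps:.2f} seconds)")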

Although the process is similar for both captioning and subtitling, there are some important differences. Subtitles are generally developed with a hearing audience in mind, which means that viewers will know when there is a change in voice, music, or sound effects. However, challenges can arise when working in a language you don’t know, particularly with languages that include ligatures or are read from right to left. In addition, the time needed to read the subtitle must match the amount of time the speaker is talking.
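One way to picture that timing constraint is as a simple reading-rate check: the number of characters in the subtitle divided by the time it stays on screen. The sketch below is illustrative only; the 15-characters-per-second threshold is an assumed figure, not one Kelly cited.

# Hypothetical reading-speed check: can a viewer comfortably read this
# subtitle in the time the speaker is talking? The threshold is an
# assumption for illustration.

def fits_reading_speed(text: str, duration_seconds: float,
                       max_chars_per_second: float = 15.0) -> bool:
    """Return True if the subtitle fits within the assumed reading rate."""
    return len(text) <= duration_seconds * max_chars_per_second

if __name__ == "__main__":
    line = "I never expected to see you here."
    print(fits_reading_speed(line, 2.5))  # 33 characters in 2.5 s: True
    print(fits_reading_speed(line, 1.5))  # 33 characters in 1.5 s: False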

Line 21 also offers transcription services for situations in which no script exists. In one instance, Kelly was asked to transcribe a speech given by the Dalai Lama. Although the message was clear when spoken, Kelly noted that the Dalai Lama doesn’t use a lot of words and tends to omit articles and verbs. She elected to add some of the missing words to aid reading comprehension, but she used a very light touch so as not to impose her own style and voice.

Kelly also talked about some of the tools she uses, including InqScribe transcription software and Telestream’s CaptionMaker and MacCaption, and she shared a few memorable captioning missteps she’s come across over the years. The first was a still of Star Trek’s Spock, grasping a computer with a look of anguish on his face. The somewhat baffling caption reads “sobbing mathematically.” She also mentioned a documentary in which Long John Baldry is talking about being at Lindisfarne, which had been transcribed as “Lindis Farm,” and a show on bioengineering technology in which P-glycoprotein was interpreted as “picklocker protein.”

We also gleaned additional information through a question-and-answer session with Kelly, including the following:

  • Industry style guides exist but are quite loose, so Line 21 relies primarily on its house style.
  • Kelly aims to transcribe pretty much verbatim, rather than trying to meet a certain reading rate.
  • Captioners are not expected to capture speech that is indiscernible to a hearing audience, but something like an auctioneer speaking quickly should be transcribed.
  • Voice recognition software “isn’t there yet” as a viable alternative to transcription (and if you’re not convinced, look at some examples of YouTube’s automatic captioning feature).
  • Similarly, captioning software does not make you a skilled captioner, just as having a pencil does not turn you into Picasso.

Finally, Kelly shared a bit of trivia about how the company’s name was chosen: line 21 is the line in the television broadcast signal that is reserved for closed captioning.

Missed the meeting? Download the audio file (EAC members only).

Amy Haagsma is a communications professional and a graduate of SFU’s Editing Certificate program.

Karen Barry is launching into freelance editing and is currently enrolled in SFU’s Editing Certificate program. She has a background in biology and over 15 years’ experience writing and editing research papers, technical reports, grant proposals, and promotional and educational materials.

