
Closed captioning

Closed captioning (CC) and subtitling are both processes of displaying text on a television, video screen, or other visual display to provide additional or interpretive information. Both are typically used as a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including descriptions of non-speech elements. Other uses have included providing a textual alternative language translation of a presentation's primary audio language that is usually burned-in (or "open") to the video and unselectable.

HTML5 defines subtitles as a "transcription or translation of the dialogue when sound is available but not understood" by the viewer (for example, dialogue in a foreign language) and captions as a "transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information when sound is unavailable or not clearly audible" (for example, when audio is muted or the viewer is deaf or hard of hearing).[1]
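In HTML5 itself, this distinction surfaces in the `kind` attribute of the `<track>` element. A minimal, illustrative example (the file names are hypothetical):

```html
<video controls src="programme.mp4">
  <!-- "subtitles": the viewer can hear but may not understand the language -->
  <track kind="subtitles" src="programme.fr.vtt" srclang="fr" label="Français">
  <!-- "captions": transcription plus sound effects and other audio cues -->
  <track kind="captions" src="programme.en.vtt" srclang="en" label="English (CC)" default>
</video>
```

The browser presents each track in its caption menu; `default` marks the track enabled when the user has expressed no preference.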

Terminology[edit]

The term closed indicates that the captions are not visible until activated by the viewer, usually via the remote control or menu option. On the other hand, the terms open, burned-in, baked on, hard-coded, or simply hard indicate that the captions are visible to all viewers as they are embedded in the video.


In the United States and Canada, the terms subtitles and captions have different meanings. Subtitles assume the viewer can hear but cannot understand the language or accent, or the speech is not entirely clear, so they transcribe only dialogue and some on-screen text. Captions aim to describe to the deaf and hard of hearing all significant audio content—spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking—along with any significant music or sound effects, using words or symbols. The term closed caption has also come to refer to the North American EIA-608 encoding used with NTSC-compatible video.


The United Kingdom, Ireland, and a number of other countries do not distinguish between subtitles and captions and use subtitles as the general term. The equivalent of captioning is usually referred to as subtitles for the hard of hearing. Their presence is indicated by on-screen notation reading "Subtitles", or previously "Subtitles 888" or just "888" (the latter two referring to the conventional teletext channel for captions), which is why the term subtitle is also used to refer to the Ceefax-based teletext encoding used with PAL-compatible video. The term subtitle has been replaced with caption in a number of markets—such as Australia and New Zealand—that purchase large amounts of imported US material, much of which had the US CC logo already superimposed over its start. In New Zealand, broadcasters superimpose an ear logo with a line through it to represent subtitles for the hard of hearing, even though they are now referred to as captions. In the UK, modern digital television services have subtitles for the majority of programs, so it is no longer necessary to highlight which have subtitling/captioning and which do not.


Remote control handsets for TVs, DVDs, and similar devices in most European markets often use "SUB" or "SUBTITLE" on the button used to control the display of subtitles/captions.

History[edit]

Open captioning[edit]

Regular open-captioned broadcasts began on PBS's The French Chef in 1972.[2] WGBH began open captioning of the programs Zoom, ABC World News Tonight, and Once Upon a Classic shortly thereafter.

Technical development of closed captioning[edit]

Closed captioning was first demonstrated in the United States at the First National Conference on Television for the Hearing Impaired at the University of Tennessee in Knoxville, Tennessee, in December 1971.[2] A second demonstration of closed captioning was held at Gallaudet College (now Gallaudet University) on February 15, 1972, where ABC and the National Bureau of Standards demonstrated closed captions embedded within a normal broadcast of The Mod Squad. At the same time in the UK, the BBC was demonstrating its Ceefax text-based broadcast service, which it was already using as a foundation for the development of a closed caption production system. The BBC was working with Professor Alan Newell of the University of Southampton, who had been developing prototypes in the late 1960s.


The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA.[2] As a result of these tests, the FCC in 1976 set aside line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs.


In 1979, the BBC became the first broadcaster to include closed captions (called subtitles in the UK), based on the Teletext framework, for pre-recorded programming.


Real-time captioning, a process for captioning live broadcasts, was developed by the National Captioning Institute in 1982.[2] In real-time captioning, stenotype operators who are able to type at speeds of over 225 words per minute provide captions for live television programs, allowing the viewer to see the captions within two to three seconds of the words being spoken.


Major US producers of captions are WGBH-TV, VITAC, CaptionMax and the National Captioning Institute. In the UK and Australasia, Ai-Media, Red Bee Media, itfc, and Independent Media Support are the major vendors.


Improvements in speech recognition technology mean that live captioning may be fully or partially automated. BBC Sport broadcasts use a "respeaker": a trained human who repeats the running commentary (with careful enunciation and some simplification and markup) for input to the automated text generation system. This is generally reliable, though errors are not unknown.[3]

Full-scale closed captioning[edit]

The National Captioning Institute was created in 1979 in order to get the cooperation of the commercial television networks.[2]


The first use of regularly scheduled closed captioning on American television occurred on March 16, 1980.[4] Sears had developed and sold the Telecaption adapter, a decoding unit that could be connected to a standard television set. The first programs seen with captioning were a Disney's Wonderful World presentation of the film Son of Flubber on NBC, an ABC Sunday Night Movie airing of Semi-Tough, and Masterpiece Theatre on PBS.[5]


Since 2010, the BBC has provided a 100% broadcast captioning service across all seven of its main broadcast channels: BBC One, BBC Two, BBC Three, BBC Four, CBBC, CBeebies, and BBC News.


BBC iPlayer launched in 2008 as the first captioned video-on-demand service from a major broadcaster, meeting levels of captioning comparable to those provided on its broadcast channels.

Legislative development in the U.S.[edit]

Until the passage of the Television Decoder Circuitry Act of 1990, television captioning was performed by a set-top box manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI). (At that time a set-top decoder cost about as much as a TV set itself, approximately $200.) Through discussions with the manufacturer it was established that the appropriate circuitry integrated into the television set would be less expensive than the stand-alone box, and Ronald May, then a Sanyo employee, provided expert witness testimony on behalf of Sanyo and Gallaudet University in support of the passage of the bill. On January 23, 1991, the Television Decoder Circuitry Act of 1990 was passed by Congress.[2] This Act gave the Federal Communications Commission (FCC) power to enact rules on the implementation of closed captioning. It required all analog television receivers with screens 13 inches or larger, whether sold or manufactured, to be able to display closed captioning by July 1, 1993.[6]


Also, in 1990, the Americans with Disabilities Act (ADA) was passed to ensure equal opportunity for persons with disabilities.[2] The ADA prohibits discrimination against persons with disabilities in public accommodations or commercial facilities. Title III of the ADA requires that public facilities—such as hospitals, bars, shopping centers and museums (but not movie theaters)—provide access to verbal information on televisions, films or slide shows.


The Federal Communications Commission requires all providers of programs to caption material which has audio in English or Spanish, with certain exceptions specified in Section 79.1(d) of the commission's rules. These exceptions apply to new networks; programs in languages other than English or Spanish; networks having to spend over 2% of income on captioning; networks having less than US$3,000,000 in revenue; and certain local programs; among other exceptions.[7] Those who are not covered by the exceptions may apply for a hardship waiver.[8]


The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers by July 1, 2002.[9] All TV programming distributors in the U.S. are required to provide closed captions for Spanish-language video programming as of January 1, 2010.[10]


A bill, H.R. 3101, the Twenty-First Century Communications and Video Accessibility Act of 2010, was passed by the United States House of Representatives in July 2010.[11] A similar bill, S. 3304, with the same name, was passed by the United States Senate on August 5, 2010, by the House of Representatives on September 28, 2010, and was signed by President Barack Obama on October 8, 2010. The Act requires, in part, for ATSC-decoding set-top box remotes to have a button to turn on or off the closed captioning in the output signal. It also requires broadcasters to provide captioning for television programs redistributed on the Internet.[12]


On February 20, 2014, the FCC unanimously approved the implementation of quality standards for closed captioning,[13] addressing accuracy, timing, completeness, and placement. This is the first time the FCC has addressed quality issues in captions.


In 2015, a law was passed in Hawaii requiring two screenings a week of each movie with captions on the screen. In 2022 a law took effect in New York City requiring movie theaters to offer captions on the screen for up to four showtimes per movie each week, including weekends and Friday nights.[14]


Some state and local governments (including Boston, Massachusetts; Portland, Oregon; Rochester, New York; and Seattle, Washington) require closed captioning to be activated on TVs in public places at all times, even if no one has requested it.[15]

Philippines[edit]

Republic Act 10905 requires all TV networks in the Philippines to provide closed captions.[16] As of 2018, the country's three major TV networks were testing the closed captioning system on their transmissions. ABS-CBN added CC to its daily 3 O'Clock Habit in the afternoon. 5 began implementing CC on its live noon and nightly news programs. GMA once broadcast its nightly and late-night news programs with CC but has since stopped. Only select Korean dramas, local or foreign movies, and the shows Biyahe ni Drew (English: Drew's Explorations) and Idol sa Kusina (English: Kitchen Idol) air with proper closed captioning.[17]


Closed captioning in Filipino films has been inconsistent, with inclusion left to production companies' judgment of its impact on the viewing experience of those who do not understand the language. Since 2016, Filipino-language films, as well as some titles on streaming services such as iWant, have included English subtitles in some screenings. The relevant law was passed by Gerald Anthony Gullas Jr., a lawmaker from Cebu City, who implemented regulations standardizing both official languages of the Philippines, given that many Filipinos were not fluent in English.[18]

Legislative development in Australia[edit]

The government of Australia provided seed funding in 1981 for the establishment of the Australian Caption Centre (ACC) and the purchase of equipment. Captioning by the ACC commenced in 1982 and a further grant from the Australian government enabled the ACC to achieve and maintain financial self-sufficiency. The ACC, now known as Media Access Australia, sold its commercial captioning division to Red Bee Media in December 2005. Red Bee Media continues to provide captioning services in Australia today.[19][20][21]

Funding development in New Zealand[edit]

In 1981, TVNZ held a telethon to raise funds for Teletext-encoding equipment used for the creation and editing of text-based broadcast services for the deaf. The service came into use in 1984. Caption creation and importing were paid for as part of the public broadcasting fee until the creation of the NZ on Air taxpayer fund, which is used to provide captioning for NZ On Air content, TVNZ news shows, and conversion of EIA-608 US captions to the preferred EBU STL format for TVNZ 1, TV 2, and TV 3 only, with archived captions available to FOUR and select Sky programming. During the second half of 2012, TV3 and FOUR began providing non-Teletext DVB image-based captions on their HD service and used the same format on the satellite service; this has since caused major timing issues in relation to server load, as well as the loss of captions from most SD DVB-S receivers, such as those Sky Television provides to its customers. As of April 2, 2013, only the Teletext page 801 caption service remained in use, with the informational non-caption Teletext content discontinued.

Application[edit]

Closed captions were created for deaf and hard of hearing individuals to assist in comprehension. They can also be used as a tool by those learning to read, learning to speak a non-native language, or in an environment where the audio is difficult to hear or is intentionally muted. Captions can also be used by viewers who simply wish to read a transcript along with the program audio.


In the United States, the National Captioning Institute noted that English as a foreign or second language (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s before built-in decoders became a standard feature of US television sets. This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment.[22]


Closed captions are also used in public environments, such as bars and restaurants, where patrons may not be able to hear over the background noise, or where multiple televisions are displaying different programs. In addition, online videos may have their audio processed by automatic speech recognition algorithms, which can produce compounding chains of errors. When a video is truly and accurately transcribed, the closed-captioning publication serves a useful purpose, and the content is available for search engines to index and make available to users on the internet.[23][24][25]


Some television sets can be set to automatically turn captioning on when the volume is muted.

Roll-up, scroll-up, paint-on, or scrolling: Real-time words sent in paint-on or scrolling mode appear from left to right, up to one line at a time; when a line is filled in roll-up mode, the whole line scrolls up to make way for a new line, and the line on top is erased. The lines usually appear at the bottom of the screen, but can actually be placed on any of the 14 screen rows to avoid covering graphics or action. This method is used when captioning video in real time, such as for live events, where a sequential word-by-word captioning process is needed or a pre-made intermediary file is not available. This mode is signaled by a two-byte caption command, or in Teletext by replacing rows for a roll-up effect and duplicating rows for a paint-on effect. This allows for real-time caption line editing.
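The roll-up behavior just described can be sketched as a small buffer (a toy illustration, not a broadcast decoder; the class and method names are invented). In EIA-608 the mode is selected by two-byte control codes: 0x14 0x25 chooses a two-row roll-up window, and 0x14 0x2D is the carriage return that scrolls it.

```python
class RollUpWindow:
    """Toy model of an EIA-608 roll-up caption window.

    In EIA-608, a decoder enters roll-up mode via a two-byte control
    code (e.g. 0x14 0x25, "Roll-Up Captions, 2 rows"); a carriage
    return (0x14 0x2D) scrolls the window up one row.
    """

    def __init__(self, rows=2):
        self.rows = rows   # visible rows (2, 3, or 4 in EIA-608)
        self.lines = [""]  # bottom entry is the line currently being typed

    def type_text(self, text):
        # Words arrive left to right on the bottom row.
        self.lines[-1] += text

    def carriage_return(self):
        # Scroll up: open a new bottom row, dropping the top row if the
        # window would exceed its height.
        self.lines.append("")
        if len(self.lines) > self.rows:
            self.lines.pop(0)

    def display(self):
        return list(self.lines)
```

Typing fills the bottom row; each carriage return scrolls the window, discarding the oldest line once the window height is exceeded.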

Improvements over EIA-608 captioning include:

An enhanced character set with more accented letters and non-Latin letters, and more special symbols

Viewer-adjustable text size (called the "caption volume control" in the specification), allowing individuals to adjust their TVs to display small, normal, or large captions

More text and background colors, including both transparent and translucent backgrounds to optionally replace the big black block

More text styles, including edged or drop-shadowed text rather than the letters on a solid background

More text fonts, including monospaced and proportionally spaced, serif and sans-serif, and some playful cursive fonts

Higher bandwidth, to allow more data per minute of video

More language channels, to allow the encoding of more independent caption streams

Uses in other media[edit]

DVDs and Blu-ray Discs[edit]

NTSC DVDs may carry closed captions in data packets of the MPEG-2 video streams inside the VIDEO_TS folder. Once played out of the analog outputs of a set-top DVD player, the caption data is converted to the Line 21 format.[37] It is output by the player on composite video (or an available RF connector) for a connected TV's built-in decoder or a set-top decoder as usual. Captions cannot be output on S-Video or component video outputs due to the lack of a colorburst signal on line 21. (However, if the DVD player is in interlaced rather than progressive mode, closed captioning will be displayed on the TV over component video input if TV captioning is turned on and set to CC1.) When viewed on a personal computer, caption data can be viewed by software that can read and decode the caption data packets in the MPEG-2 streams of the DVD-Video disc. Windows Media Player in Windows Vista (before Windows 7) supported only closed caption channels 1 and 2 (not 3 or 4). Apple's DVD Player cannot read and decode Line 21 caption data recorded on a DVD made from an over-the-air broadcast, although it can display some movie DVD captions.


In addition to Line 21 closed captions, video DVDs may also carry subtitles, which are generally rendered from the EIA-608 captions as a bitmap overlay that can be turned on and off via a set-top DVD player or DVD player software, just like the textual captions. This type of captioning is usually carried in a subtitle track labeled either "English for the hearing impaired" or, more recently, "SDH" (Subtitles for the Deaf and Hard of hearing).[38] Many popular Hollywood DVD-Videos carry both subtitles and closed captions (e.g. the Stepmom DVD by Columbia Pictures). On some DVDs, the Line 21 captions contain the same text as the subtitles; on others, only the Line 21 captions include the additional non-speech information (sometimes even song lyrics) needed for deaf and hard-of-hearing viewers. European Region 2 DVDs do not carry Line 21 captions and instead list the subtitle languages available; English is often listed twice, once as a representation of the dialogue alone, and again as a second subtitle set carrying additional information for the deaf and hard-of-hearing audience. (Many deaf/HOH subtitle files on DVDs are reworkings of original teletext subtitle files.)


Blu-ray media typically cannot carry VBI data such as Line 21 closed captioning, because the DVI-based High-Definition Multimedia Interface (HDMI) specification was extended only for synchronized digital audio, replacing older analog standards such as VGA, S-Video, component video, and SCART. However, a few early titles from 20th Century Fox Home Entertainment carried Line 21 closed captions that are output when using the analog outputs (typically composite video) of a few Blu-ray players. Both Blu-ray and DVD can use either PNG bitmap subtitles or 'advanced subtitles' to carry SDH-type subtitling, the latter being an XML-based textual format that includes font, styling, and positioning information as well as a Unicode representation of the text. Advanced subtitling can also include additional media accessibility features such as "descriptive audio".

Movies[edit]

There are several competing technologies used to provide captioning for movies in theaters. Cinema captioning falls into the categories of open and closed. The definition of "closed" captioning in this context is different from television, as it refers to any technology that allows as few as one member of the audience to view the captions.


Open captioning in a film theater can be accomplished through burned-in captions, projected text or bitmaps, or (rarely) a display located above or below the movie screen. Typically, this display is a large LED sign. In a digital theater, open caption display capability is built into the digital projector. Closed caption capability is also available, allowing third-party closed caption devices to plug into the digital cinema server.


Probably the best known closed captioning option for film theaters is the Rear Window Captioning System from the National Center for Accessible Media. Upon entering the theater, viewers requiring captions are given a panel of flat translucent glass or plastic on a gooseneck stalk, which can be mounted in front of the viewer's seat. In the back of the theater is an LED display that shows the captions in mirror image. The panel reflects captions for the viewer but is nearly invisible to surrounding patrons. The panel can be positioned so that the viewer watches the movie through the panel, and captions appear either on or near the movie image. A company called Cinematic Captioning Systems has a similar reflective system called Bounce Back. A major problem for distributors has been that these systems are each proprietary, and require separate distributions to the theater to enable them to work. Proprietary systems also incur license fees.


For film projection systems, Digital Theater Systems, the company behind the DTS surround sound standard, has created a digital captioning device called the DTS-CSS (Cinema Subtitling System). It is a combination of a laser projector which places the captioning (words, sounds) anywhere on the screen and a thin playback device with a CD that holds many languages. If the Rear Window Captioning System is used, the DTS-CSS player is also required for sending caption text to the Rear Window sign located in the rear of the theater.


Special effort has been made to build accessibility features into digital projection systems (see digital cinema). Through SMPTE, standards now exist that dictate how open and closed captions, as well as hearing-impaired and visually impaired narrative audio, are packaged with the rest of the digital movie. This eliminates the proprietary caption distributions required for film, and the associated royalties. SMPTE has also standardized the communication of closed caption content between the digital cinema server and third-party closed caption systems (the CSP/RPL protocol). As a result, new, competitive closed caption systems for digital cinema are now emerging that will work with any standards-compliant digital cinema server. These newer closed caption devices include cupholder-mounted electronic displays and wireless glasses which display caption text in front of the wearer's eyes.[39] Bridge devices are also available to enable the use of Rear Window systems. As of mid-2010, the remaining challenge to the wide introduction of accessibility in digital cinema is the industry-wide transition to SMPTE DCP, the standardized packaging method for very high quality, secure distribution of digital movies.

Sports venues[edit]

Captioning systems have also been adopted by most major league and high-profile college stadiums and arenas, typically through dedicated portions of their main scoreboards or as part of balcony fascia LED boards. These screens display captions of the public address announcer and other spoken content, such as those contained within in-game segments, public service announcements, and lyrics of songs played in-stadium. In some facilities, these systems were added as a result of discrimination lawsuits. Following a lawsuit under the Americans with Disabilities Act, FedExField added caption screens in 2006.[40][41] Some stadiums utilize on-site captioners while others outsource them to external providers who caption remotely.[42][43]

Video games[edit]

The infrequent appearance of closed captioning in video games became a problem in the 1990s as games began to commonly feature voice tracks, which in some cases contained information the player needed in order to progress.[44] Closed captioning of video games is becoming more common. One of the first video game companies to feature closed captioning was Bethesda Softworks, in its 1990 releases of Hockey League Simulator and The Terminator 2029. Infocom also offered Zork Grand Inquisitor in 1997.[45] Many games since then have at least offered subtitles for spoken dialog during cutscenes, and many include significant in-game dialog and sound effects in the captions as well. For example, with subtitles turned on in the Metal Gear Solid series of stealth games, not only are subtitles available during cutscenes, but any dialog spoken during real-time gameplay is captioned as well, allowing players who cannot hear the dialog to know what enemy guards are saying and when the main character has been detected. Also, in many of developer Valve's video games (such as Half-Life 2 or Left 4 Dead), when closed captions are activated, dialog and nearly all sound effects either made by the player or from other sources (e.g. gunfire, explosions) are captioned.


Video games do not offer Line 21 captioning decoded and displayed by the television itself, but rather a built-in subtitle display more akin to that of a DVD. The game systems themselves have no role in the captioning; each game must have its subtitle display programmed individually.


Reid Kimball, a game designer who is hearing impaired, is attempting to educate game developers about closed captioning for games. Reid started the Games[CC] group to closed caption games and serve as a research and development team to aid the industry. Kimball designed the Dynamic Closed Captioning system, writes articles and speaks at developer conferences. Games[CC]'s first closed captioning project called Doom3[CC] was nominated for an award as Best Doom3 Mod of the Year for IGDA's Choice Awards 2006 show.

Online video streaming[edit]

Internet video streaming service YouTube offers captioning on videos. The author of a video can upload a SubViewer (*.SUB), SubRip (*.SRT), or *.SBV file.[46] As a beta feature, the site also added the ability to automatically transcribe and generate captioning on videos, with varying degrees of success depending on the content of the video.[47] However, on August 30, 2020, the company announced that community captions would end on September 28.[48] Automatic captioning is often inaccurate on videos with background music or exaggerated emotion in speaking. Variations in volume can also result in nonsensical machine-generated captions. Additional problems arise with strong accents, sarcasm, differing contexts, or homonyms.[49]
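The caption formats named above are simple text files, so conversion between them is mechanical. As a minimal sketch (the function name is invented; it assumes well-formed input with blank lines between cues), SubViewer (.SBV) timestamps such as `0:00:01.500,0:00:04.000` can be rewritten as SubRip (.SRT) cues:

```python
def sbv_to_srt(sbv_text):
    """Convert SubViewer (*.SBV) caption text to SubRip (*.SRT).

    SBV cues look like:  0:00:01.500,0:00:04.000  followed by text lines.
    SRT cues are numbered and use HH:MM:SS,mmm --> HH:MM:SS,mmm separators.
    """
    def srt_time(t):
        h, m, s = t.split(":")
        sec, ms = s.split(".")
        return f"{int(h):02d}:{m}:{sec},{ms}"

    out = []
    # Cues are separated by blank lines in both formats.
    for i, cue in enumerate(sbv_text.strip().split("\n\n"), start=1):
        times, *text = cue.splitlines()
        start, end = times.split(",")
        out.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n" + "\n".join(text))
    return "\n\n".join(out) + "\n"
```

Each output cue gains a sequence number and the comma-based millisecond separator that SRT players expect.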


On June 30, 2010, YouTube announced a new "YouTube Ready" designation for professional caption vendors in the United States.[50] The initial list included twelve companies that had passed a caption-quality evaluation administered by the Described and Captioned Media Project, maintained a website and a YouTube channel where customers could learn more about their services, and agreed to post rates for the range of services they offer for YouTube content.


Flash video also supports captions using the Distribution Exchange profile (DFXP) of the W3C timed text format. The latest Flash authoring software adds free player skins and caption components that enable viewers to turn captions on or off during playback from a web page. Previous versions of Flash relied on the third-party Captionate component and skin to caption Flash video. Custom Flash players designed in Flex can be tailored to support the timed-text exchange profile, Captionate .XML, or SAMI files (e.g. Hulu captioning). This is the preferred method for most US broadcast and cable networks that are mandated by the U.S. Federal Communications Commission to provide captioned on-demand content. Media encoding firms generally use software such as MacCaption to convert EIA-608 captions to this format. The Silverlight Media Framework[51] also includes support for the timed-text exchange profile for both download and adaptive streaming media.
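A DFXP document is ordinary XML in the W3C timed text vocabulary, so a minimal caption file can be generated with a standard XML library. A sketch assuming the TTML1 namespace (early DFXP drafts used http://www.w3.org/2006/10/ttaf1 instead); the helper name and sample cues are invented:

```python
import xml.etree.ElementTree as ET

TT_NS = "http://www.w3.org/ns/ttml"  # TTML1 namespace

def make_ttml(cues):
    """Build a minimal TTML caption document from (begin, end, text) tuples."""
    ET.register_namespace("", TT_NS)  # serialize with a default namespace
    tt = ET.Element(f"{{{TT_NS}}}tt")
    body = ET.SubElement(tt, f"{{{TT_NS}}}body")
    div = ET.SubElement(body, f"{{{TT_NS}}}div")
    for begin, end, text in cues:
        # Each cue becomes a timed <p> element.
        p = ET.SubElement(div, f"{{{TT_NS}}}p", begin=begin, end=end)
        p.text = text
    return ET.tostring(tt, encoding="unicode")
```

A real caption file would add styling and layout regions, but this shape (timed `<p>` elements inside `<div>`/`<body>`) is the core of the format.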


Windows Media Video can support closed captions for both video-on-demand and live streaming scenarios. Typically, Windows Media captions support the SAMI file format but can also carry embedded closed caption data.


The EBU-TT-D distribution format supports multiple players across multiple platforms.


QuickTime video supports raw EIA-608 caption data via a proprietary closed caption track, which consists of EIA-608 byte pairs wrapped in a QuickTime packet container with different IDs for the two line 21 fields. These captions can be turned on and off and appear in the same style as TV closed captions, with all the standard formatting (pop-on, roll-up, paint-on), and can be positioned and split anywhere on the video screen. QuickTime closed caption tracks can be viewed in macOS or Windows versions of QuickTime Player, iTunes (via QuickTime), iPod Nano, iPod Classic, iPod Touch, iPhone, and iPad.
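The byte pairs mentioned above are seven-bit values protected by an odd-parity eighth bit, a detail EIA-608 inherits from its line 21 broadcast origins. A small sketch of that packing (the helper names are invented; QuickTime's packet framing is omitted):

```python
def with_odd_parity(b7):
    """Add the EIA-608 odd-parity bit to a 7-bit value.

    Each EIA-608 byte carries 7 data bits; bit 7 is set so that the
    total number of 1 bits in the byte is odd.
    """
    ones = bin(b7 & 0x7F).count("1")
    return (b7 & 0x7F) | (0x80 if ones % 2 == 0 else 0)

def caption_byte_pair(c1, c2):
    """Pack two characters into one EIA-608 byte pair with parity."""
    return bytes([with_odd_parity(ord(c1)), with_odd_parity(ord(c2))])
```

For example, 'A' (0x41) has an even number of 1 bits, so the parity bit is set and it is transmitted as 0xC1; a decoder strips bit 7 before interpreting the character.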

Theatre[edit]

Live plays can be open captioned by a captioner who displays lines from the script, including non-speech elements, on a large display screen near the stage.[52] Software is also now available that automatically generates the captioning and streams it to individuals sitting in the theater, who view it using heads-up glasses or on a smartphone or computer tablet.


Captioning on display screens or personal devices can be difficult to follow at the same time as the performance on stage. To solve this problem, Creative Captioning integrates captions into the set design as part of the creative process.[53] Creative Captions are a visible part of the show: the set designer and captioner match the style of the captions to the atmosphere of the show through characteristics such as animation, color, font, and size.[54]

Logo[edit]

The current and most familiar logo for closed captioning consists of two Cs (for "closed captioned") inside a television screen. It was created at WGBH. The other logo, trademarked by the National Captioning Institute, is that of a simple geometric rendering of a television set merged with the tail of a speech balloon; two such versions exist – one with a tail on the left, the other with a tail on the right.[59]

See also[edit]

Fansub

Same Language Subtitling

Sign language on television

Speech-to-text reporter (captioner), an occupation

Subtitles

Surtitles

Synchronized Accessible Media Interchange (SAMI) file format

Synchronized Multimedia Integration Language (SMIL) file format
