Creating Inclusive Content

Creating accessible audio and video media

Accessible audio and video is essential for people with disabilities.

Depending on the content of your media, one or more of the following may be required:

  • Captions/subtitles - a text version of the audio, shown synchronised in the media player. They are needed on any video with spoken dialogue, or where sound provides contextual meaning (a brief sketch of attaching a caption track on a web page follows this list).
  • Transcript - a separate text version of the audio
  • Audio description of visual information - an additional audio stream that describes important visual content.
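
A caption or description track can be attached directly when you embed media in your own web pages. Below is a minimal TypeScript sketch using the standard HTML5 track element; the video selector, file name and label are placeholder assumptions, not references to any university system.

    // Attach an external WebVTT caption file to an existing <video> element.
    // "lecture-captions.vtt" and the label are placeholders.
    const video = document.querySelector<HTMLVideoElement>("video");

    if (video) {
      const track = document.createElement("track");
      track.kind = "captions";            // "subtitles" or "descriptions" are also valid kinds
      track.label = "English captions";
      track.srclang = "en";
      track.src = "lecture-captions.vtt"; // WebVTT file containing the caption cues
      track.default = true;               // show this track by default
      video.appendChild(track);
    }

Hosted platforms such as Panopto or Microsoft Stream handle this step for you; the sketch is only relevant where media is embedded directly in pages you control.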

Understanding user needs

(source: https://www.w3.org/WAI/media/av/)

  • Many people who are Deaf can read text well. They get the audio information from transcripts or captions. Some people prefer sign language.
  • Some people who are hard of hearing like to listen to the audio to hear what they can, and have captions to fill in what they can't hear adequately.
  • Some people who have difficulty processing auditory information also use captions. Many use transcripts so they can read at their own pace.
  • Some people who are blind or have low vision can't see videos well or at all. They use audio description of visual information to understand what's going on visually.
  • Some people who are Deaf-blind use a screen reader and braille to read descriptive transcripts that include the audio and visual information as text.
  • Some people cannot focus and comprehend auditory or visual information when there are changing visuals. For most videos, they also need descriptive transcripts.
  • Some people cannot use their hands and use voice recognition software to operate their computer, including the media player. And people who are blind need the media player to work without a mouse.
  • Some people use multiple accessibility features simultaneously. For example, someone might want captions, description of visual information as text, and description in audio.

Adding accessibility features: captioning

Refer to the W3C page on Planning Audio and Video Media to understand which accessibility aspects your audio or video needs.

Captioning Support

We have a responsibility to ensure all digital content is accessible. Four different types of captioning support are available for our students:

Captions

Captions are on-screen text that reproduces the speech in a video, typically displayed at the bottom of the screen. Their purpose is to make video content accessible.

Captions can then be converted and downloaded as a transcript. A transcript is the speech from a video or audio recording converted into a written, plain-text document.
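
To illustrate the relationship between the two formats, the TypeScript sketch below strips the header, cue numbers and timings from a small WebVTT caption sample, leaving a plain-text transcript. The sample cue text is invented purely for the example.

    // Convert WebVTT caption cues into a plain-text transcript by dropping the
    // header, bare numeric cue identifiers, timing lines and blank lines.
    const vtt = `WEBVTT

    1
    00:00:01.000 --> 00:00:04.000
    Welcome to this week's lecture.

    2
    00:00:04.500 --> 00:00:08.000
    Today we will look at accessible media.
    `;

    const transcript = vtt
      .split("\n")
      .map((line) => line.trim())
      .filter(
        (line) =>
          line !== "" &&
          line !== "WEBVTT" &&
          !/^\d+$/.test(line) &&   // bare cue numbers
          !line.includes("-->")    // cue timing lines
      )
      .join("\n");

    console.log(transcript);
    // Welcome to this week's lecture.
    // Today we will look at accessible media.

In practice, captioning platforms typically provide the transcript download for you; the sketch only shows what that conversion involves.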

Captions and transcripts are beneficial to students who have a disability, including students who:

  • are D/deaf or hard of hearing
  • have a specific learning difficulty such as dyslexia or Attention Deficit Hyperactivity Disorder (ADHD).

For students with ADHD, concentration and attention can be areas of difficulty. Captions can help maintain concentration, and transcripts allow opportunities to recap and check learning.

Captions and transcripts can also be useful for students without a disability, such as:

  • students who are watching digital content in noisy environments
  • students who are in a quiet environment, such as a library
  • students who speak English as a second or other language (ESOL)

The four types of captioning that are available at the University of Greenwich are:

  1. Automatic speech recognition (ASR)

    This is available for all students.

    Every video/lecture recording uploaded to Panopto has machine-generated captioning available. This is automatic and is available within 15 minutes of the video being uploaded. Accuracy is typically 70-75%, depending on the audio quality of the recording.

    Machine-generated captioning is also available for Microsoft Teams videos: recordings need to be uploaded to Microsoft Stream and captioning selected there. The time taken to generate captions varies with the length of the video, and the accuracy of the captions can be checked and edited if needed.

  2. Accurate Captioning via Moodle

    This is available for all students.

    Students can request improved captioning if they find that the ASR-produced captions for a specific recording have low accuracy. A request button is available on Moodle; students click this and complete a request form. The request goes to a team of student staff who manually caption the recording to 100% accuracy. One request is needed per recording, and the turnaround time is one week.

  3. Human Captioning via AI Media

    This is for students with a disability, such as being D/deaf or hard of hearing or having a specific learning difficulty, who are registered with the Student Wellbeing Service and have a Greenwich Inclusion Plan in place.

    Working in collaboration with AI Media, the university provides 100% accurate human captioning.

    Authorisation needs to be requested from and arranged by the Student Wellbeing Service: Disability and Dyslexia. Authorised academics can then request human captioning via Panopto after a recording is uploaded. The turnaround time is four days.

  4. Live Captioning

    This is for students with a disability, such as being D/deaf or hard of hearing or having a specific learning difficulty, who are registered with the Student Wellbeing Service and have a Greenwich Inclusion Plan in place.

    The university works in collaboration with AI Media to provide 100% accurate captioning for live sessions. Authorisation and access are arranged by the Student Wellbeing Service: Disability and Dyslexia, in liaison with the student and AI Media.

    Captions appear in real time, making live content immediately accessible.

    For further details, please contact the Student Wellbeing Service at wellbeing@gre.ac.uk.

Panopto

Useful audio/video accessibility links