2014 NC State Global Accessibility Awareness Day Website Challenge

The University IT Accessibility Office is once again sponsoring the NC State Global Accessibility Awareness Day Website Challenge to encourage campus website owners and designers to make accessibility improvements to their websites. Global Accessibility Awareness Day is held annually “to get people talking, thinking and learning about digital accessibility and users with different disabilities.”

The contest, which runs April 15 to May 14, includes two categories:

  • Sites that improve their overall accessibility by the greatest percentage.
  • Sites that have the ARIA roles “main” and “navigation” added to at least 80 percent of their pages.

You can view the current leaderboards and see how you stack up against the competition. The winners will be announced on Global Accessibility Awareness Day, which is May 15. Sites will be considered in three size categories when determining the largest percentage of accessibility errors corrected.

  • 1000+ Pages
  • 100-999 Pages
  • 1-99 Pages

You can learn more about adding ARIA landmarks to Web pages. You can also attend a number of workshops and training sessions over the next month to learn how to make your websites more accessible.
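
For reference, here is a minimal sketch of what those two landmarks look like in markup (the elements and content are placeholders, not code from any specific campus site):

<div role="navigation">
   <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/about">About</a></li>
   </ul>
</div>
<div role="main">
   <h1>Page Title</h1>
   <p>Page content goes here.</p>
</div>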

Incredible Accessible Modal Window, Version 2

Just take me to the demo of the Incredible Accessible Modal Window, Version 2.

Rich Caloggero at MIT and I were talking recently about screen reader support in the original Incredible Accessible Modal Window and he pointed out to me that different screen readers handle virtual cursor support differently in the modal window. This sent me further down the rabbit hole of screen reader support with ARIA and what exactly is going on.

The Situation

I hesitate to call this the “problem” because I’m not sure what the real problem is yet. I’m sure it is some combination of differing interpretations of specifications, technological limitations of different screen readers, design decisions, the needs of the user, and bugs. The situation is that all screen readers, except for NVDA, can use their virtual cursor to navigate a role="dialog".

The root of the situation is that a role="dialog" should be treated as an application window. This fundamentally changes the way a screen reader user interacts with an object because, by default, the user is now expected to interact with the application window using the application’s own navigation methods. In other words, the screen reader user is not supposed to use their virtual cursor.

It is clear from the spec that when a screen reader encounters an object with an application-type role, it should stop capturing keyboard events and let them pass to the application. This in essence turns off the virtual cursor for JAWS and NVDA. What is not clear is whether it is permissible for the user to optionally re-enable their virtual cursor within an application. JAWS says yes and NVDA says no. (As a note, JAWS actually requires the user to manually enable application mode rather than doing it automatically.)

This has real world implications. Typically for a role="dialog" the user would use the Tab key to navigate between focusable elements and read the page that way. But what if the modal dialog contains text that is not associated with a focusable element?

The spec says that “if coded in an accessible manner, all text will be semantically associated with focusable elements.” I think this is easily achievable in many situations; however, I question whether it is practical in all situations. In my experience a lot of content is being crammed into some modal dialogs, sometimes more content than can be neatly associated with focusable elements. In theory, with enough tabindex="0" and aria-labelledby attributes you could associate everything with a focusable element, but I wonder if this would get too unwieldy in some situations.
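
To make that concrete, here is a small sketch (my own illustration, not code from the modal window itself) of shoehorning a block of plain text into the Tab order so a user navigating only by Tab can reach and read it:

<div role="dialog" aria-labelledby="dialogTitle">
   <h1 id="dialogTitle">Terms of Service</h1>
   <!-- Plain text with no natural focusable element, made reachable
        by Tab via tabindex="0" so it is not skipped over -->
   <p tabindex="0">A long passage of text that belongs to no form
      field or button in the dialog.</p>
   <button>Accept</button>
</div>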

There is always the question of if developers should be cramming so much information into modal dialogs, but that’s another discussion for another day. I’m simply trying to deal with the fact that people are putting so much content in there.

A further real world implication concerns when the virtual cursor should be available at all: if you allow users to use their virtual cursor in some situations in an application region, are there situations where that could end up hurting the user? For example, it’s not hard for me to imagine a modal dialog where it would be useful to let the user navigate with their virtual cursor. However, if a screen reader user is interacting with Google Docs, which is in essence one large role="application", the results can be disastrous. Are there certain application contexts where we would want the user to be able to enable their virtual cursor and other contexts where we would want to prevent it? That just made things a lot more complicated.

Just to complicate things more, VoiceOver and ChromeVox don’t really have a concept, to my knowledge, of turning a virtual cursor on and off. That means their users can browse the contents of the role="dialog" any way they want, and there is not much I as a developer can do about it.

A Partial Solution?

One of the things Rich and I learned in this adventure is that if you include a role="document" inside the role="dialog", NVDA allows you to use the virtual cursor. This now gives all screen reader users the ability to fully navigate all of the contents.
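
In markup terms the change is small. A minimal sketch of the pattern (IDs and content are placeholders):

<div role="dialog" aria-labelledby="modalTitle">
   <div role="document">
      <h1 id="modalTitle">Registration</h1>
      <p>With the document role wrapping the contents, NVDA users can
         browse this text with the virtual cursor, just like users of
         other screen readers.</p>
      <button>Continue</button>
   </div>
</div>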

Is this a good thing? Based on the reality of how people are actually implementing modal dialogs, I think it is. Some modal dialogs are in essence becoming miniature versions of Web pages, not just simple forms or messages. Given the alternative of having to programmatically shoehorn every piece of text into a relationship with a focusable element, I think this is a good option for some pages.

I still think that people should revisit the overall usability of their application which might require such complex modal dialogs in the first place. There are probably better ways to design the user interactions.

So is NVDA wrong in not allowing virtual browsing in an application? I don’t think so; that is the intention behind the application region. Is JAWS wrong for allowing the use of the virtual cursor in an application? Probably not, because it is always good to give screen reader users the option of trying to save themselves from bad coding, and using the virtual cursor might be the only way they can do that. However, my guess is that using the virtual cursor in something designed to be an application will usually lead to more confusion than assistance.

VoiceOver Improvements

One additional improvement – in the original version of the Incredible Accessible Modal Window there was a shim in place for VoiceOver users so that the aria-labelledby attribute would be automatically announced. VoiceOver in OS X 10.9 fixes this problem, so the shim is no longer needed.

2013 NC State World Usability Day Website Challenge Results

Congratulations to all of the NC State Website owners who participated in NC State’s 2013 World Usability Day Website Challenge. NC State users can view the detailed results of the challenge. Website owners competed in two areas.

  1. Which sites, in their respective size categories, could correct the largest percentage of their accessibility errors in the month leading up to World Usability Day.
  2. Which sites could include a skip to main content link on at least 80% of their pages.

Accessibility Errors Corrected

Together we corrected a total of 416,196 accessibility errors for this challenge. Since the Accessibility Scan started in March of 2013, we have collectively corrected 1,188,908 accessibility errors.

Skip to Main Content Links

During this challenge we added 2,661 new skip to main content links across our pages, with 128 of our sites now having skip to main content links on at least 80% of their pages.

Congratulations again to all of the NC State Website owners!

NC State Web Accessibility Challenge on World Usability Day

NC State University’s Office of IT Accessibility is sponsoring a Web Site Accessibility Challenge in conjunction with World Usability Day. World Usability Day brings people together “to ensure that the services and products important to life are easier to access and simpler to use.” In order to encourage Web site owners to help make our university Web pages more accessible, there are two challenges.

  1. To address general usability, which sites can correct the largest percentage of their accessibility errors.
  2. To address users who cannot use a mouse, which sites can add a specific accessibility feature to at least 80% of their Web pages – the ability to allow users to skip to the main content of a page using only a keyboard.

To learn more about your Web site’s accessibility and to see tutorials on how to improve its accessibility, view your Web Site Accessibility Scan.

To learn more about adding skip to main content links to a page, view the Skip To Main Content Link Tutorial in the Web Accessibility Handbook.
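
The basic pattern is small. Here is a minimal sketch (the ID, link text, and tabindex detail follow the common technique, not necessarily the tutorial’s exact markup):

<body>
   <a href="#main-content">Skip to main content</a>
   <!-- banner and navigation markup here -->
   <!-- tabindex="-1" helps some browsers actually move keyboard
        focus to the target when the link is activated -->
   <div id="main-content" tabindex="-1">
      <h1>Page Title</h1>
   </div>
</body>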

The contest winners will be determined by the last rescan submitted by 11:59 PM on November 13, and the winners will be announced on November 14, World Usability Day.

Screen Readers at a Crossroads

I believe screen reading software stands at a crossroads right now. At Google I/O 2013, Google showed some of the possibilities of the ChromeVox API. What they demonstrated represents some fundamental changes in the ways screen reader software interacts with Web browsers. In this post I will explain why I see this as a fundamental shift and discuss both the risks and rewards that I see with this model.

So what’s the big deal?

The first thing to look at is how screen reading software typically interacts with a Web page. Usually the software pulls data out of some model representing the Web page, interprets it, and presents it to the user. The data could be coming directly from the browser and the DOM or through the operating system’s accessibility layer. No matter where it gets that data, the screen reader almost always pulls the data and then interprets it itself based on the semantic markup on the page. The Web page does not usually push data to the screen reader software or tell the software how to interpret the data independent of the semantic markup. This means that when a screen reader user interacts with the page, every time they navigate somewhere or interact with an element, the screen reader is pulling information from the data source, interpreting it, and presenting it to the user.

This is why we tell people to build pages with good semantic structure and all of the other accessibility things we say. This way, when a user encounters one of these elements, the screen reader software can interpret what it is and present it to the user in a consistent way. So no matter what screen reader software you use, when something is coded as an <h1>, all screen reader software reports to its users that they are reading a heading level 1. Each screen reader application might speak this information differently or have slight variations for how you navigate the items, but there is always consistency within the screen reader application itself. This is good for both the screen reader user and the developer. The screen reader user knows that the heading navigation keys will always move to a particular heading and to the next and previous headings. The developer doesn’t have to worry about how each screen reader will represent this <h1> to the user – they just know it will work. There is a standard which defines what <h1> means, and everyone agrees to follow that definition.

Now none of that has changed in ChromeVox. An <h1> is still reported as a heading level 1 to the user, and the user can still navigate through the headings the same way. What has changed with the ChromeVox API is that the Web page now has the ability to modify the way an <h1> gets interpreted by the screen reading software. In fact, the ChromeVox API allows the Web page to reinterpret ANY semantic markup or even ANY action the screen reader user takes in whatever way the page sees fit. The fundamental shift is from the screen reading software pulling and interpreting the data to the Web application interpreting and pushing the data to the screen reading software.

An example

To see this in action you can either watch the following YouTube videos demonstrating this or you can read the demonstration page using two different screen reading programs, ChromeVox and any other screen reader.

With this example, please keep in mind that I am not an expert on the ChromeVox API. This example is what I cobbled together after watching a presentation at Google I/O and seeing some sample code on their slides. There is not a well documented public API to do all of this yet to my knowledge.

In this example there is a simple page with four headings, some text, and an image. If you use any screen reader software other than ChromeVox the page will behave just as you expect it to. The user can browse the page linearly or jump around from heading to heading.

Page read with JAWS and Internet Explorer

If you read this page with ChromeVox you will have a very different experience because I have used the ChromeVox API to change the way certain semantic elements are presented to the user, and I’ve even overridden the natural flow of the page so unexpected things happen when you browse the page. The two items I have changed are:

  1. When using the commands to go to the next and previous headings, instead of relying on ChromeVox to announce the heading text and then say “heading level 1”, I have told ChromeVox to say “You have jumped to the next heading which is called <insert heading text>.” I have redefined the built-in behavior of ChromeVox when it encounters headings when navigating to the next and previous headings.
  2. When browsing to the next and previous heading, when you try to go between the third and fourth headings, ChromeVox will tell you “You are not ready for the next heading yet. First you must spend time frolicking with the penguins. After that you may go to the next heading. Image. Penguins frolicking.” I have redefined ChromeVox navigation commands to do whatever I want, independent of the semantic structure of the page.

Page read with ChromeVox and Chrome
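
To give a flavor of the approach, here is a rough sketch of a heading override. This is purely illustrative, not the actual demo code: the plain “h” keystroke and the heading lookup are stand-ins I made up; cvox.Api.speak() is part of the ChromeVox page API (the second argument, 0, flushes any queued speech).

// Illustrative only -- not the actual demo code.
document.addEventListener('keydown', function (event) {
   // "h" stands in here for a heading navigation command.
   if (event.key !== 'h' || !window.cvox || !cvox.Api) {
      return;
   }
   // Simplified: grab the first heading instead of tracking position.
   var heading = document.querySelector('h1, h2, h3, h4, h5, h6');
   if (heading) {
      // Replace the default "heading level 1" announcement.
      cvox.Api.speak('You have jumped to the next heading which is ' +
         'called ' + heading.textContent, 0);
   }
});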

It seems silly, but there are serious implications

Yes, that example is rather sophomoric, but it proves a point. Despite using <h1> elements, I was able to present those elements to the user in a very non-standard way. Also, despite using a navigation technique that is only supposed to allow me to jump from heading to heading, I was able to force the screen reader user to the image, even though they requested the next or previous heading. I am not doing any keyboard trapping to do this. It’s all done with the ChromeVox API so ChromeVox will behave differently than expected.

So why would they do this?

Does Google have evil intentions with this to trick users? I don’t think so. Google is actually doing some pretty cool things with this. For instance, this is how they are adding additional support for MathML. ChromeVox now has some native support for MathML, but it doesn’t fully implement it. What if you are trying to express math in a way that ChromeVox does not support yet? As a Web page developer, you have the ability to write some JavaScript to access the ChromeVox API that tells ChromeVox to interpret certain MathML symbols differently than it would natively.

If you aren’t so mathematically inclined there are other benefits too. If you do have a user interface that is tremendously complex and doesn’t lend itself to navigation by semantic markup, you could make the screen reader do and say whatever you want based on the user’s input. There’s now no reason to tie yourself to semantic navigation or even ARIA attributes for trying to convey richer UI elements. You can in essence write your own screen reader for each application you develop, and just use ChromeVox as the TTS engine to actually speak it.

Is this a bad thing?

Not always, but it definitely opens the door to abuse. Most Web pages and applications can be written accessibly using semantic markup with ARIA attributes, and ChromeVox can still handle those things just fine. In fact, I bet Google will still encourage you to use standards in your Web page. What this opens the door to is creating ChromeVox-only solutions for certain Web pages and applications.

This page best viewed with Internet Explorer 6…

Are we really ready to go back to this, or is Google, as they claim, advancing Web accessibility with features that have never been possible before?

On the positive side, this has the potential to let developers create Web pages and applications accessible to a level that has not been possible before. However, will ARIA not suffice to meet most if not all of our needs?

On the negative side, creating custom user interfaces for one particular group of users means, in essence, creating two sets of code. Will all of the new features in the non-screen reader UI be translated instantly over to the screen reader UI?

Well I heard that screen reader users like it when …

How many times have we heard misinformed developers start a justification for a particular implementation with these words? With great power comes great responsibility. I know Google does not intend for developers to use this API in obnoxious ways, but it’s out there now, and the reality is it will get misused some. Do we want to trust the same developers who just now figured out that “spacer graphic” is never appropriate alt text to be able to define Web page navigation in a way that is “more superior” than just using good heading structure?

So where do we go from here?

If ChromeVox had a bigger market share, this conversation would probably be a little different. ChromeVox does have one advantage over other screen readers though: it is by far the most accessible way to interact with Google Apps. Are we experiencing a market shift? Is Google trying to redefine the way screen reader software should work with Web pages? Is Google promoting its own ecosystem as the superior answer to their competitors? It worked for Apple, iTunes, and iOS devices. Are we at that early stage where the benefits of the ecosystem are not yet fully realized? When big players with lots of money start playing, they like to change the rules of the game to give themselves the advantage. That’s the free market, and it’s seldom a tidy process.

How will the other screen reader vendors respond? Will developers start utilizing this API in ways that make ChromeVox the necessary choice for their application? Is this just JAWS scripts now being implemented by Web developers? Does this fundamentally break the Web? Is this all just a tempest in a teapot?

I believe Google is in it to win it. They don’t see this as a research project or a neat idea. They believe they are advancing the state of Web accessibility. Do we agree with that?

The Gamification of Accessibility, Round 1: Lessons Learned

NC State held its first ever Global Accessibility Awareness Day Challenge to see which campus Web sites could correct the largest number of accessibility errors. The challenge was based upon automated scans that evaluated sites for both Section 508 and WCAG 2 conformance. The winners were the sites which corrected the highest percentage of errors. The challenge was just one part of the larger accessibility scanning system we have set up.

I could write a lot about the system I’ve created to do this, and I’ll share more details about the mechanics of the larger system later, but here I want to share some lessons learned about one of the more novel aspects of this system – the game aspect. What happens when you introduce gaming aspects into accessibility?

For this project, the game aspects I introduced were the following.

  • the ability to see where you rank anonymously among all other sites on campus
  • guides for how to improve your site
  • the ability to quickly see how changes you make impact your accessibility ranking
  • awards and recognition, not just for the “winners” in each category, but for all sites which show significant improvement

1. Sometimes it’s all about the game…friendly competition is a motivator

In fact, some people will fix things without you even having to ask them. If I had approached a group on campus and said, “You have 28,000 accessibility errors spread across 8,000 pages and I need you to correct them,” let’s face it, I would have gotten no traction. Perhaps they would have felt that if they’d had 28,000 errors this long, why fix them now? Perhaps they had other pressing matters to deal with. Who knows.

Instead, I sent out an email that in essence said, “We are having a contest to see who can correct the greatest percentage of accessibility errors in their site over the next 2 weeks.” When they went to view their current standing they saw that they were ranked as the 371st most accessible site out of 385 and ranked in the bottom 10% in all categories for all sites on campus. At the end of two weeks the site in question had corrected about 27,500 errors, ranked as the 40th most accessible site out of 385, and was in the top 5% in all categories for all sites on campus. I didn’t have to ask this group to do anything – they just did it themselves. They ended up doing quite well in the contest. After the contest was over I saw one of the directors who oversees this group, who had only learned about the contest after it ended, and I told him that I was co-opting his employees for my own purposes.

2. But sometimes it’s all about the beer…it’s about building relationships

Yes, we play games because we like to win, but we also play games with people because we like to be with the other people. When I started the contest I knew who several of the participants would be because I’ve gotten to know them over the years and knew they would jump all over this. The great surprise though is who else shows up to play the game. It’s like a “pick-up” game of accessibility fixes. You bring a ball to the playground and you never know who is going to want to play.

Lots of people decided to come play. Currently 65% of all of the Web sites on campus have had someone claim them as their own and look at their accessibility report. What percentage of people normally would hit “delete” or “mark as spam” when they receive an email from “the accessibility guy” saying he has an accessibility report for them? (I’m not saying that would ever happen here at NC State – it’s just a hypothetical question.)

Not all of those Web site owners decided to play in the contest, but their coming to the game, and the discussions I had with them, led to the next three lessons learned.

3. Have training and tutorials immediately available to users that solve the specific problems they are trying to fix

WCAG 2.0 is great, but let’s face it, not everyone was meant to have a deep meaningful relationship with the WCAG documentation. The documentation is extremely thorough, but that’s also what makes it so unapproachable for so many people. The documentation is War and Peace, but what people really want is the Cliff’s Notes version.

Most people aren’t like me. They don’t like to read about all of the nuances and possible implementations for each success criterion in WCAG or come up with creative implementations that haven’t been thought of yet. They have other jobs they want to do.

The moral of the story – don’t send people to the WCAG documentation to learn more about their errors or how to fix them. Tell them in your own words. You know what problems they are facing, so give them the solutions they need, not EVERY solution and nuance possible.

In our system, when users view a list of their accessibility errors, they also get a brief description of each error that I wrote, along with a link to a tutorial I wrote that gives them exactly what they need. Many of the examples I use come straight out of the WCAG documentation, but I’ve streamlined the process. As an example, when users see that they don’t have the language of the page defined, they get a link to Defining the Language of the Document.
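
That particular fix, for reference, is a one-line change at the top of the page:

<html lang="en">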

Many of these tutorials were written before the game began, but they were updated as new questions came in.

4. Different people come to the game with different skill levels, so you have to figure out how to meet people where they are

I would love to play basketball with Michael Jordan, but Michael Jordan might not like playing with me. How do you create a game space where people with varying skill levels can play?

When the game was first envisioned it was designed with Web developers in mind – people who were comfortable with viewing the source code of documents. I always knew I wanted to reach out to people beyond the traditional Web developer group, like our communications people and content creators. I thought one of my “ins” with these groups was that I could also report how many broken links and misspelled words they had. While people might not make accessibility a top priority in their jobs, correcting misspelled words and broken links was a high priority for professionally produced content.

When I sent the communicators and content creators information about the game, I suggested that they might want to forward much of it to their Web developers. Then came the day when I taught a class on Interpreting Your Accessibility Scan Results, and the only people who showed up were content creators. To them source code was a foreign language. So how do you teach about an accessibility report that talks in technical terms and gives line numbers of where errors occur? Or, an even bigger problem, how do you talk about an accessibility report with someone who doesn’t even understand the concept of accessibility?

We spent the hour looking at their reports and starting to break out which errors were probably things their Web developers would have to fix and which were things that they as content creators would have to fix. Out of that discussion we were able to start generalizing probabilities of whether certain errors applied to them or to their developers. That information will soon make its way into everyone’s report, which leads us to the next lesson learned.

5. Don’t get too attached to your game design, because it probably could have been designed better and should have been designed better

Creating good games is hard, and the best ones change when they need to. Did you know that the original rules of basketball did not allow dribbling?
[Image: the original rules of basketball, typed on faded yellow paper]

You might not get all of your rules right the first time either. Be open to suggestions that people give you for improvements. Also be very responsive with changes, especially when you have one of those moments when accessibility has people’s undivided attention.

Here are some of the suggestions people have given since the game went live that I implemented, usually within a day or two of receiving the request.

  1. ability to request rescans as often as you want them as opposed to once a quarter – this took a major overhaul of some of the back-end processing
  2. showing historical data for how a site has improved over time
  3. instead of showing line numbers where misspelled words are, show the actual misspelled words – this was not as easy as it seems given the tool we were using
  4. added distribution graphs to show where their site falls in terms of total errors

6. Automated scans miss some of the biggest accessibility problems

I knew this going in, but it was really obvious when I saw some of the results. One of our sites which did quite well in the contest still had significant accessibility problems that the scan didn’t pick up. There are certain aspects of accessibility that automated scans simply cannot accurately assess. For example, if a site uses a table-based layout, how do you determine if an actual data table embedded in the layout table is coded correctly? How do you even know if it’s a data table if there are no table headers, captions, or scope or summary attributes? Automated tests also fail at assessing user interaction with a Web page, like testing for true keyboard accessibility.
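
For contrast, here is the kind of markup that makes a data table recognizable as one, to both scanners and screen readers (a generic sketch, not from any campus site):

<table>
   <caption>Office Hours</caption>
   <tr>
      <th scope="col">Day</th>
      <th scope="col">Hours</th>
   </tr>
   <tr>
      <td>Monday</td>
      <td>9:00 AM - 5:00 PM</td>
   </tr>
</table>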

This automated scan doesn’t solve all of our problems, but it lets us take a step, which leads us to the next lesson.

7. Mario didn’t save the princess in world 1-1

[Image: Super Mario Brothers screenshot, with Mario standing in world 1-1]

I think one of the things that often holds us back in the accessibility field is that we want to make sure we say and do everything as perfectly as possible and that we don’t leave anything out. Being accurate and complete is very important; however, if completeness becomes the thing that prevents us from making any progress, then we aren’t helping anyone.

I think this problem impacts both people who are trying to create accessible content and those of us trying to help them do it. We will always find more accessibility problems to fix. We will always find a more accessible way to implement something. As teachers we will always find better ways to say something and more complete examples to give. But you have to get it out the door. I believe in release early, iterate often.

In fact one of the aspects of games is the ability to slowly improve yourself over time. When you are able to slowly yet meaningfully make progress on improving your Web site’s accessibility, that can be very rewarding. Being able to say, “two months ago I was here, but now I’m here” is powerful.

In game design I think this approach is important too. Like I said, the tool as it is now doesn’t solve all of our problems, but we’ve made meaningful steps. I can now add new features into the game to help address some of the remaining deficiencies we have. Here are some of the planned new features.

  1. There will soon be a set of manual tests that you can do and earn extra points for your score once you complete and pass the test. For example, if you do a test to see if the keyboard focus is visible at all times you will get some bonus points. If you actually make the keyboard focus visible at all times you will get even more points.
  2. I am looking at adding additional evaluation tools into the mix that are better at detecting certain errors than our current toolset.
  3. And as a teaser, I’m also looking to add in some artificial intelligence and statistical techniques to assess for some problems that there are no good ways to test for yet. Stay tuned for this one.

8. If something good comes out of the game, share it with the world

The final lesson for this round – if you had fun and something good came of the game, share it with others. We had two projects come out of our challenge that are now available to everyone.

First, we now have a new Drupal module that automatically looks for links that are coded to open in new windows and appends some text to the link to alert the user to this. The appended text is hidden offscreen, where screen reader users can still read it, and it comes into view when the link receives keyboard focus or a mouse hover.
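
The idea behind the module, sketched in plain JavaScript (my own illustration, not the module’s actual code; the class name is a placeholder for whatever CSS positions the text offscreen):

// Sketch of the idea, not the Drupal module's actual code.
var links = document.querySelectorAll('a[target="_blank"]');
for (var i = 0; i < links.length; i++) {
   var note = document.createElement('span');
   // An offscreen class positions the text out of view until the
   // link receives keyboard focus or a mouse hover.
   note.className = 'offscreen-until-focus';
   note.textContent = ' (opens in a new window)';
   links[i].appendChild(note);
}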

Second, we now have a bookmarklet that lets you easily assess the reading level of published Web pages.

I’ve written a lot and could write more, but it will have to wait for the next post.

How to Use Video.js – The Big Picture

This is a high-level overview of how to use Video.js to display your videos in HTML5 pages. This assumes you know some basics about video. Using Video.js is a three step process.

  1. Creating the video in the appropriate format
  2. Creating your captions
  3. Inserting the correct HTML code in the page

Creating the video in the appropriate format

One of the strengths of HTML5 is the ability to add video right in the Web page without any additional software, like QuickTime or Flash. However, due to licensing conflicts and turf wars, not all browsers support all video formats. Fortunately, with HTML5 video you can provide the video in multiple formats and the browser will choose the one it can play.

To get this to work there are a few things to keep in mind. At a minimum, you should provide your video as an MP4 file. This will work in all browsers except for Firefox. Unless you want your Firefox users to have to use the Video.js Flash fallback option, you should also include the video as a WebM file.

If you need help converting your video from one format into another, use the Miro Video Converter.

Creating the captions

To create captions the first thing you need to do is create a plain-text version of what is said in the video. If the video is short, like 5 minutes long, this is easy to do in a text editor. If it is longer, you might want to pay someone to create a transcript for you. The IT Accessibility Office can provide you with a list of transcription companies.

After creating the transcript you have to add time stamps to the file to tell when each piece of text is supposed to display on the screen. The IT Accessibility Office provides a free service to add time stamps. If you are interested, please contact the IT Accessibility Office.
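
For reference, a time-stamped caption file in the WebVTT format (the captions.vtt referenced in the code below) looks like this; the times and text are placeholders:

WEBVTT

00:00:00.000 --> 00:00:04.000
Welcome to this introduction to Web accessibility.

00:00:04.000 --> 00:00:09.500
In this video we will cover captioning basics.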

Inserting the correct HTML code in the page

After you have your video file(s) and caption file, upload them to the server along with an HTML5 file with the following code embedded in it.

In the <head> section:

<link href="http://vjs.zencdn.net/4.0/video-js.css" rel="stylesheet">
<script src="http://vjs.zencdn.net/4.0/video.js"></script>

Somewhere in the <body> section:

<video id="my_vid_id" class="video-js vjs-default-skin" controls preload="auto" width="640" height="264" poster="poster.png" data-setup='{}'>
   <source src="movie.mp4" type='video/mp4' />
   <source src="movie.webm" type='video/webm' />
   <track kind="captions" src="captions.vtt" srclang="en" label="English" />
</video>

When this page is loaded in a Web browser, Video.js will

  1. check to see which of the video formats the browser natively supports
  2. load the Flash fallback video player if the browser cannot natively play any of the formats provided
  3. replace the browser’s default video controls with its video controls

A New Look-And-Feel

I’ve updated the blog with a new template. It’s one that we use on campus on several sites. I like this one better than my old one, and this way any accessibility work I do on it will also be transferred to several other sites around campus.

Accessible Video.js Player Available on Global Accessibility Awareness Day

Video.js is an HTML5 video player that makes embedding video in HTML5 pages very easy and gives you a consistent look-and-feel across browsers. Video.js will use the browser’s built-in ability to play video in the format the browser prefers, but it uses a standard set of user interface (UI) controls that work across browsers. It will also work on mobile devices.

One of the strengths of Video.js is how it solves one of the big problems with HTML5 video – browser support of codecs. If you’ve ever worked with HTML5 video you know you usually have to provide your video in at least two formats, MP4 and WebM, to ensure that it will work in all browsers. With Video.js, you can still provide the video in multiple formats, and it will use the browser’s built-in ability to play the video in the format the browser prefers. However, if a browser does not support any of the formats you have provided, Video.js will use a Flash fallback player to play the video while still using its standard set of UI controls.

Another great strength of Video.js is that the player controls are accessible. Video.js is an open source project, and I’ve been working with the community to make the player more accessible. With the new version of Video.js released today, which is coincidentally Global Accessibility Awareness Day, the player is now accessible to keyboard-only users, screen reader users, and voice-interface users. The specific accessibility improvements are:

  • The UI controls are keyboard accessible, including a visible focus indicator on the controls.
  • The UI controls are named properly and the status of each is updated by the appropriate ARIA attributes, so screen reader users can fully interact with the player (see the sketch after this list). It works with JAWS, NVDA, and VoiceOver. The accessible controls include the following:
    • Play/Pause Button
    • Progress Bar
    • Time Elapsed
    • Time Remaining
    • Fullscreen Toggle
    • Volume Slider
    • Mute Button
    • Caption Button
  • The tab order for the UI elements has been altered to make the flow more logical.
  • The font size of the caption track is now enlarged when viewing the video in full screen mode.
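
As a sketch of what “named properly with the appropriate ARIA attributes” means for one of these controls, a slider like the volume control exposes itself roughly like this (simplified and paraphrased from the player’s markup; exact class names and values will differ):

<div class="vjs-volume-bar" role="slider" tabindex="0"
     aria-label="volume level"
     aria-valuemin="0" aria-valuemax="100"
     aria-valuenow="50" aria-valuetext="50%">
</div>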

Making Video.js accessible is a work in progress and there are still a few things that need to be done.

  1. The tab order probably needs to be adjusted some more to make it more logical. To alter it any more than what it is right now will take some significant modifications to the CSS and markup.
  2. Full keyboard accessibility needs to be more robust. Because the player controls show and hide themselves based on whether the mouse is hovering over the player, this causes problems for keyboard users. The solution isn’t as simple as just adding the appropriate onfocus and onblur events, because the video player is made up of multiple UI elements, and bubbling the focus up through multiple elements takes some extra testing to make sure it works in all browsers (see the sketch after this list). However, currently there is a little of what I call “accessibility by accident.” If you start playing the video by using the keyboard, the mouse never enters or exits the video area, thus never triggering the show/hide event. That makes the player controls stay visible all the time. The downside is that the controls cover the bottom portion of the video, albeit in a semi-transparent manner. There are, however, a couple of edge cases where the controls do end up hiding themselves. Coming up with a more robust solution will solve this problem. Playing the video in full screen mode seems to eliminate the problematic edge cases.
  3. Currently the caption button is acting as a menu with a submenu. Getting it to truly act the way a submenu is supposed to will require some more significant modifications to the code.
  4. The time progress bar currently increments and decrements by 5 seconds when using the left and right arrow keys. This is better than what it was before, when it only changed by 1 second. I’m not sure what the ideal amount is and am open to suggestions.
  5. Support for technologies like Dragon NaturallySpeaking needs to be more robust. You can currently navigate the entire UI with Dragon, but it could be made more elegant.
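
For item 2, one direction for a more robust fix is to listen for focusin and focusout on the player container; unlike focus and blur, those events bubble up from the child controls. A rough sketch under the assumption that a CSS class controls visibility (the class name is my placeholder, not a real Video.js hook; focusin/focusout support itself varies by browser, which is exactly the cross-browser testing mentioned above):

var player = document.getElementById('my_vid_id');
// focusin/focusout bubble, so one pair of listeners on the container
// covers every control inside the player.
player.addEventListener('focusin', function () {
   player.classList.add('force-show-controls');  // placeholder class
});
player.addEventListener('focusout', function () {
   player.classList.remove('force-show-controls');
});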

If users, particularly keyboard-only users or screen reader users, have difficulty interacting with the player, viewing it in full screen mode might correct some of the problems.

There was no long term plan to release this today on Global Accessibility Awareness Day – it was just serendipitous.

2013 Global Accessibility Awareness Day Challenge Results

Congratulations to all of the Web site owners who participated in NC State’s First Annual Global Accessibility Awareness Day Challenge! As a group we corrected 194,232 accessibility errors across many NC State Web sites over the past two weeks. Here is a summary of some of what we fixed.

  • 96,747 link related problems
  • 23,702 alternative text instances
  • 15,546 heading structure problems
  • 14,586 code validation errors
  • 9,963 keyboard access problems
  • 5,930 form element problems
  • 4,749 table problems
  • 1,792 language definition problems

For the full listing with the final standings, visit the NC State Global Accessibility Awareness Day Website Challenge page.

In the next blog post, I’ll discuss some lessons learned from this contest.