Winners of the 3rd Annual NC State Global Accessibility Awareness Day Challenge

(EDIT: Because of a technical problem, the Ocean Observing and Modeling Group site was not originally scanned accurately. The results have now been updated.)

Congratulations to the winners of the 3rd Annual NC State Global Accessibility Awareness Day Challenge! Together we corrected more than 125,000 accessibility errors over the past month!

There were two aspects to the contest: which sites could correct the largest percentage of their accessibility errors and which sites could provide a link from their home page to the Accessibility @ NC State site.

Sites correcting the largest percentage of their accessibility errors

Sites linking to the Accessibility @ NC State site

  • BTEC
  • Center for Family and Community Engagement
  • Communication
  • DELTA
  • English
  • Foreign Languages and Literatures
  • Horticultural Science
  • Interdisciplinary Studies
  • IT Accessibility
  • Lebanese Studies
  • Ocean Observing and Modeling Group
  • Philanthropy Journal
  • Psychology
  • Recycling
  • Social Work
  • Sociology and Anthropology
  • Student Media
  • Technology Transfer
  • Textiles

2015 Global Accessibility Awareness Day Website Accessibility Challenge

NC State University will be hosting our Third Annual Global Accessibility Awareness Day Website Accessibility Challenge. The purpose of the challenge is to

  • promote accessibility throughout the campus
  • improve the accessibility of our websites
  • teach developers and content creators how to build accessibility into their websites

The contest runs from April 15 through May 20, and the winners will be announced on Global Accessibility Awareness Day, May 21. There are two competitions.

  1. Sites which can correct the largest percentage of their accessibility errors
  2. Sites which can include a link to the Accessibility @ NC State page

Sites which can correct the largest percentage of their accessibility errors

One competition is to see which websites can correct the greatest percentage of their accessibility errors during the contest. Winners will be selected from

  • Large sites (1000 or more pages)
  • Medium sites (100-999 pages)
  • Small sites (less than 100 pages)

Sites that correct at least 50% or 75% of their errors will also be recognized.

Sites which can include a link to the Accessibility @ NC State page

It is important that users can easily find accessibility information about our campus, whether they are looking for accessibility services or are having problems interacting with online content. We have a central place where all campus accessibility information can be found, including contact information – the Accessibility at NC State site.

For the contest, websites that include a link to this site from their home page will be recognized. A common approach is to place the link in the footer of your page:

<a href="http://accessibility.ncsu.edu/">Accessibility</a>

Learning more about Accessibility

Throughout the contest there are a number of opportunities to learn about accessibility and how to make your websites more accessible. Note: All training events are in the Avent Ferry Technology Center, where there is plenty of parking.

Website Accessibility Tune-Ups (Lunch & Learn series)

Bring your lunch and your website for a free website accessibility evaluation from Greg Kraus, the IT Accessibility Coordinator. You will see how he evaluates sites for accessibility, and you will go away with actionable items you can start implementing to improve your site's accessibility. You can sign up in ClassMate with the following links.

Web Accessibility Testing and Techniques

In this workshop you will learn about Web accessibility concepts, how to code for accessibility, and how to use common accessibility testing tools. You do not need to bring your own website for this workshop, but you can work on it if you want to.

Co-Working Day Drop-in Session

Every Friday several people from around campus gather in Avent Ferry Technology Center, Room 106 for a co-working event – a place where people work on their own projects or collaborate with others in the room to solve problems they are having. On May 1 and May 8 Greg Kraus will be at the Co-Working Day from 9 a.m. to 4 p.m. to answer any questions you may have about accessibility and to work with you on the accessibility of your site. There is no sign-up. Just drop on by.

The Incredible Accessible Modal Window, Version 3

I’ve made a few minor updates to the Incredible Accessible Modal Window.

  • removed the role="document" from the contents of the window. This was originally inserted to deal with the way NVDA interacted with role="dialog", but that issue has since been resolved in NVDA.
  • made the close button an actual button instead of a link. I should have done this a long time ago and don’t know why I didn’t do it sooner.

View the Incredible Accessible Modal Window, version 3.
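
For reference, here is a minimal sketch of what the updated markup might look like. The IDs and text are illustrative, not the demo's actual code:

<!-- Minimal sketch (illustrative): role="dialog" with no inner
     role="document", and a real <button> for the close control -->
<div id="modal" role="dialog" aria-labelledby="modalTitle" aria-describedby="modalDescription">
  <h1 id="modalTitle">Registration Form</h1>
  <p id="modalDescription">A short description of the dialog's purpose.</p>
  <form>
    <!-- form fields go here -->
  </form>
  <button id="modalCloseButton">Close</button>
</div>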

Previous versions of the Incredible Accessible Modal Window

Note: This post has been edited to correct an error in the first bullet point, which originally said role="dialog" instead of role="document".


Winners of the 2014 NC State Global Accessibility Awareness Day Website Challenge

Congratulations to all of the developers who participated in the 2014 NC State Global Accessibility Awareness Day Website Challenge. Together we corrected 905,082 accessibility errors!

For each of the size categories, the Web sites that corrected the largest percentage of their errors are

  • Large Sites (1000+ pages)
    • NCSU Libraries (84% of errors corrected)
  • Medium Sites (100-999 pages)
    • African American Cultural Center (66% of errors corrected)
  • Small Sites (1-99 pages)
    • Internal Audit (25% of errors corrected)

For the ARIA Landmark portion of the challenge we had 21 sites add the main landmark to at least 80% of their pages and 23 sites add the navigation landmark to at least 80% of their pages.

Just a note: there are actually a lot more sites on campus using those two landmarks. Because of the way the scan was run, only those sites that requested a rescan during the contest were counted in these totals.

2014 NC State Global Accessibility Awareness Day Website Challenge

The University IT Accessibility Office is once again sponsoring the NC State Global Accessibility Awareness Day Website Challenge to encourage campus website owners and designers to make accessibility improvements to their websites. Global Accessibility Awareness Day is held annually “to get people talking, thinking and learning about digital accessibility and users with different disabilities.”

The contest, which runs April 15 to May 14, includes two categories:

  • Sites that improve their overall accessibility by the greatest percentage.
  • Sites that have the ARIA roles "main" and "navigation" added to at least 80 percent of their pages.

You can view the current leader boards and see how you stack up against the competition. The winners will be announced on Global Accessibility Awareness Day, which is May 15. Sites will be considered in three size categories for determining the largest percentage of accessibility errors corrected.

  • 1000+ Pages
  • 100-999 Pages
  • 1-99 Pages

You can learn more about adding ARIA landmarks to Web pages. Also, you can attend a number of workshops and training sessions over the next month to learn how to make your websites more accessible.
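
As a quick illustration, here is a minimal sketch of the two landmarks added to existing wrapper elements (the IDs are mine, not from any campus template):

<!-- Minimal sketch: the two ARIA landmarks from the challenge -->
<div id="site-navigation" role="navigation">
  <!-- site navigation links -->
</div>
<div id="page-content" role="main">
  <!-- the page's primary content -->
</div>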

Incredible Accessible Modal Window, Version 2

UPDATE: Read about version 3 of this demonstration.

Just take me to the demo of the Incredible Accessible Modal Window, Version 2.

Rich Caloggero at MIT and I were talking recently about screen reader support in the original Incredible Accessible Modal Window and he pointed out to me that different screen readers handle virtual cursor support differently in the modal window. This sent me further down the rabbit hole of screen reader support with ARIA and what exactly is going on.

The Situation

I hesitate to call this the “problem” because I’m not sure what the real problem is yet. I’m sure it is some combination of differing interpretations of specifications, technological limitations of different screen readers, design decisions, the needs of the user, and bugs. The situation is that all screen readers, except for NVDA, can use their virtual cursor to navigate a role="dialog".

The root of the situation is that a role="dialog" should be treated as an application window. This fundamentally changes the way a screen reader user interacts with the object: by default, the user is now expected to interact with the application window through the application’s defined navigation methods. In other words, the screen reader user is not supposed to use their virtual cursor.

It is clear from the spec that when a screen reader encounters an object with an application-type role, it should stop capturing keyboard events and let them pass to the application. This in essence turns off the virtual cursor for JAWS and NVDA. What is not clear is whether it is permissible for the user to optionally re-enable their virtual cursor within an application. JAWS says yes and NVDA says no. (Just a note: JAWS actually requires the user to manually enable application mode rather than doing it automatically.)

This has real-world implications. Typically for a role="dialog" the user would use their Tab key to navigate between focusable elements and read the page that way. But what if there is text within the modal dialog that is not associated with a focusable element?

The spec says that “if coded in an accessible manner, all text will be semantically associated with focusable elements.” I think this is easily achievable in many situations; however, I question whether it is practical in all of them. In my experience a lot of content is being crammed into some modal dialogs, sometimes more content than can be neatly associated with focusable elements. In theory, with enough tabindex="0" and aria-labelledby attributes you could associate everything with a focusable element, but I wonder if this would get too unwieldy in some situations.
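
To make that concrete, here is a minimal sketch of the kind of association I mean (the IDs and text are illustrative): standalone text either goes into the tab order itself or gets tied to a nearby focusable element.

<!-- Option 1: put the standalone text itself in the tab order -->
<p tabindex="0">Instructions that are not tied to any control.</p>

<!-- Option 2: associate the text with a focusable element
     (aria-describedby works like aria-labelledby and is often the
     better fit for supplementary text) -->
<p id="feeNote">A $25 fee applies to late registrations.</p>
<input type="submit" value="Register" aria-describedby="feeNote">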

There is always the question of whether developers should be cramming so much information into modal dialogs, but that’s another discussion for another day. I’m simply trying to deal with the fact that people are putting so much content in there.

A further real-world implication concerns the ability to use the virtual cursor: if you allow users to use their virtual cursor in some application regions, are there situations where that could end up hurting them? For example, it’s not hard for me to imagine a modal dialog where it would be useful to let the user navigate with their virtual cursor; however, if a screen reader user is interacting with Google Docs, which is in essence one large role="application", the results can be disastrous. Are there certain application contexts where we would want the user to be able to enable their virtual cursor and others where we would want to prevent it? That just made things a lot more complicated.

Just to complicate things more, VoiceOver and ChromeVox don’t really have a concept, to my knowledge, of turning a virtual cursor on and off. That means their users can browse the contents of the role="dialog" any way they want, and there is not much I as a developer can do about it.

A Partial Solution?

One of the things Rich and I learned in this adventure is that if you include a role="document" inside of the role="dialog", NVDA allows you to use the virtual cursor. This gives all screen reader users the ability to fully navigate all of the contents.
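
Here is a minimal sketch of the pattern (the IDs and text are illustrative):

<!-- Minimal sketch: a document region nested inside the dialog lets NVDA
     users browse the dialog's contents with the virtual cursor -->
<div role="dialog" aria-labelledby="dialogTitle">
  <div role="document">
    <h1 id="dialogTitle">Terms and Conditions</h1>
    <p>Long passages of text not associated with any focusable element
       can now be read with the virtual cursor in every screen reader.</p>
    <button>Agree</button>
  </div>
</div>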

Is this a good thing? Based on the reality of how people are actually implementing modal dialogs, I think it is. Some modal dialogs are in essence becoming miniature versions of Web pages, not just simple forms or messages. Given the alternative of having to programmatically shoehorn every piece of text into a relationship with a focusable element, I think this is a good option for some pages.

I still think that people should revisit the overall usability of their application which might require such complex modal dialogs in the first place. There are probably better ways to design the user interactions.

So is NVDA wrong in its implementation of not allowing virtual browsing in an application? I don’t think so. That is the intention behind the application region. Is JAWS wrong for allowing the use of the virtual cursor in an application? Probably not, because it is always good to give screen reader users the option of trying to save themselves from bad coding, and using the virtual cursor might be the only way they can do that. However, my guess is that using the virtual cursor in something designed to be an application will usually lead to more confusion than assistance.

VoiceOver Improvements

One additional improvement – in the original version of the Incredible Accessible Modal Window there was a shim in place for VoiceOver users so that the aria-labelledby attribute would be announced automatically. VoiceOver in OS X 10.9 fixes this problem, so the shim is no longer needed.

2013 NC State World Usability Day Website Challenge Results

Congratulations to all of the NC State website owners who participated in NC State’s 2013 World Usability Day Website Challenge. NC State users can view the detailed results of the challenge. Website owners competed in two areas.

  1. Which sites, in their respective size categories, could correct the largest percentage of their accessibility errors in the month leading up to World Usability Day.
  2. Which sites could include a skip to main content link on at least 80% of their pages.

Accessibility Errors Corrected

Together we corrected a total of 416,196 accessibility errors for this challenge. Since the Accessibility Scan started in March of 2013, we have collectively corrected 1,188,908 accessibility errors.

Skip to Main Content Links

During this challenge we added 2,661 new skip to main content links across our pages, with 128 of our sites now having skip to main content links on at least 80% of their pages.

Congratulations again to all of the NC State website owners!

NC State Web Accessibility Challenge on World Usability Day

NC State University’s Office of IT Accessibility is sponsoring a Web Site Accessibility Challenge in conjunction with World Usability Day. World Usability Day brings people together “to ensure that the services and products important to life are easier to access and simpler to use.” In order to encourage Web site owners to help make our university Web pages more accessible, there are two challenges.

  1. To address general usability, which sites can correct the largest percentage of their accessibility errors.
  2. To address users who cannot use a mouse, which sites can add a specific accessibility feature to at least 80% of their Web pages – the ability to allow users to skip to the main content of a page using only a keyboard.

To learn more about your Web site’s accessibility and to see tutorials on how to improve its accessibility, view your Web Site Accessibility Scan.

To learn more about adding skip to main content links to a page, view the Skip To Main Content Link Tutorial in the Web Accessibility Handbook.
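
The basic pattern is a link that is the first focusable element on the page and targets the main content container. Here is a minimal sketch (the id and class names are illustrative, not from the tutorial):

<!-- Minimal sketch: a skip to main content link. The link is commonly
     hidden off-screen with CSS until it receives keyboard focus. -->
<body>
  <a href="#main-content" class="skip-link">Skip to main content</a>
  <!-- header and navigation here -->
  <div id="main-content">
    <!-- primary page content -->
  </div>
</body>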

The contest winners will be determined by the last rescan submitted by 11:59 p.m. on November 13, and the winners will be announced on World Usability Day, November 14.

Screen Readers at a Crossroads

I believe screen reading software stands at a crossroads right now. At Google I/O 2013, Google showed some of the possibilities of the ChromeVox API. What they demonstrated represents a fundamental change in the way screen reader software interacts with Web browsers. In this post I will discuss why I see this as a fundamental shift, along with both the risks and rewards I see in this model.

So what’s the big deal?

The first thing to look at is how screen reading software typically interacts with a Web page. Usually the software pulls data out of some model representing the Web page, interprets it, and presents it to the user. The data could come directly from the browser and the DOM or through the operating system’s accessibility layer. No matter where it gets the data, the screen reader almost always pulls it and then interprets it itself based on the semantic markup on the page. The Web page does not usually push data to the screen reader software or tell the software how to interpret the data independent of the semantic markup. This means that every time a screen reader user navigates somewhere on the page or interacts with an element, the screen reader is pulling information from the data source, interpreting it, and presenting it to the user.

This is why we tell people to build pages with good semantic structure and to follow all of the other accessibility practices we preach. This way, when a user encounters one of these elements, the screen reader software can interpret what it is and present it to the user in a consistent way. So no matter what screen reader software you use, when something is coded as an <h1>, the software reports to its user that they are reading a heading level 1. Each screen reader application might speak this information differently or have slight variations in how you navigate the items, but there is always consistency within the screen reader application itself. This is good for both the screen reader user and the developer. The screen reader user knows that the heading navigation keys will always get them to a particular heading and to the next and previous headings. The developer doesn’t have to worry about how each screen reader will represent this <h1> to the user – they just know it will work. There is a standard which defines what <h1> means, and everyone agrees to follow that definition.

Now none of that has changed in ChromeVox. An <h1> is still reported as a heading level 1 to the user, and the user can still navigate through the headings the same way. What has changed with the ChromeVox API is that the Web page now has the ability to modify the way an <h1> gets interpreted by the screen reading software. In fact, the ChromeVox API allows the Web page to reinterpret ANY semantic markup, or even ANY action the screen reader user takes, in whatever way the page sees fit. The fundamental shift is from the screen reading software pulling and interpreting the data to the Web application interpreting and pushing the data to the screen reading software.
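
To make the push model concrete, here is a rough sketch of the kind of call involved. ChromeVox Classic injected a cvox.Api object into pages, and cvox.Api.speak() let page scripts queue their own speech; how the navigation overrides in the demo below were wired up was not publicly documented, so treat this strictly as illustrative:

// Rough sketch (illustrative): the page pushes its own speech to ChromeVox
// instead of ChromeVox pulling and interpreting the page's semantics.
// Other screen readers ignore this code entirely.
if (window.cvox && cvox.Api) {
  var button = document.getElementById('demo-button'); // hypothetical id
  button.addEventListener('click', function () {
    cvox.Api.speak('The page, not the screen reader, decided what to say here.');
  });
}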

An example

To see this in action you can either watch the following YouTube videos demonstrating this or you can read the demonstration page using two different screen reading programs, ChromeVox and any other screen reader.

With this example, please keep in mind that I am not an expert on the ChromeVox API. This example is what I cobbled together after watching a presentation at Google I/O and seeing some sample code on their slides. To my knowledge, there is not yet a well-documented public API for doing all of this.

In this example there is a simple page with four headings, some text, and an image. If you use any screen reader software other than ChromeVox the page will behave just as you expect it to. The user can browse the page linearly or jump around from heading to heading.

Page read with JAWS and Internet Explorer

If you read this page with ChromeVox you will have a very different experience because I have used the ChromeVox API to change the way certain semantic elements are presented to the user, and I’ve even overridden the natural flow of the page so unexpected things happen when you browse the page. The two items I have changed are:

  1. When using the commands to go to the next and previous headings, instead of relying on ChromeVox to announce the heading text and then say “heading level 1”, I have told ChromeVox to say “You have jumped to the next heading which is called <insert heading text>.” I have redefined ChromeVox’s built-in behavior for headings when navigating to the next and previous heading.
  2. When navigating between headings, if you try to move between the third and fourth headings, ChromeVox will tell you “You are not ready for the next heading yet. First you must spend time frolicking with the penguins. After that you may go to the next heading. Image. Penguins frolicking.” I have redefined ChromeVox navigation commands to do whatever I want, independent of the semantic structure of the page.

Page read with ChromeVox and Chrome

It seems silly, but there are serious implications

Yes, that example is rather sophomoric, but it proves a point. Despite using <h1> elements, I was able to present those elements to the user in a very non-standard way. Also, despite using a navigation technique that is only supposed to jump from heading to heading, I was able to force the screen reader user to the image, even though they requested the next or previous heading. I am not doing any keyboard trapping to accomplish this. It’s all done with the ChromeVox API, so ChromeVox behaves differently than expected.

So why would they do this?

Does Google have evil intentions here, hoping to trick users? I don’t think so. Google is actually doing some pretty cool things with this. For instance, this is how they are adding additional support for MathML. ChromeVox now has some native support for MathML, but it doesn’t fully implement it. What if you are trying to express math in a way that ChromeVox does not support yet? As a Web page developer, you can write some JavaScript against the ChromeVox API that tells ChromeVox to interpret certain MathML symbols differently than it would natively.

If you aren’t so mathematically inclined, there are other benefits too. If you have a user interface that is tremendously complex and doesn’t lend itself to navigation by semantic markup, you could make the screen reader do and say whatever you want based on the user’s input. There’s now no reason to tie yourself to semantic navigation or even ARIA attributes for trying to convey richer UI elements. You can in essence write your own screen reader for each application you develop and just use ChromeVox as the TTS engine to actually speak it.

Is this a bad thing?

Not always, but it definitely opens the door to abuse. Most Web pages and applications can be written accessibly using semantic markup with ARIA attributes, and ChromeVox can still handle those things just fine. In fact, I bet Google will still encourage you to use standards in your Web page. What this opens the door to is creating ChromeVox-only solutions for certain Web pages and applications.

This page best viewed with Internet Explorer 6…

Are we really ready to go back to this, or is Google, as they claim, advancing Web accessibility with features that have never been possible before?

On the positive side, this has the potential to let developers create Web pages and applications accessible to a level that has not been possible before. However, wouldn’t ARIA suffice for most, if not all, of our needs?

On the negative side, creating custom user interfaces for one particular group of users means, in essence, creating two sets of code. Will all of the new features in the non-screen reader UI be translated instantly over to the screen reader UI?

Well I heard that screen reader users like it when …

How many times have we heard misinformed developers start a justification for a particular implementation with these words? With great power comes great responsibility. I know Google does not intend for developers to use this API in obnoxious ways, but it’s out there now, and the reality is that it will sometimes be misused. Do we want to trust the same developers who just now figured out that “spacer graphic” is never appropriate alt text to define Web page navigation in a way that is “more superior” than just using good heading structure?

So where do we go from here?

If ChromeVox had a bigger market share, this conversation would probably be a little different. ChromeVox does have one advantage over other screen readers, though: it is by far the most accessible way to interact with Google Apps. Are we experiencing a market shift? Is Google trying to redefine the way screen reader software should work with Web pages? Is Google promoting its own ecosystem as the superior answer to its competitors? It worked for Apple, iTunes, and iOS devices. Are we at that early stage where the benefits of the ecosystem are not yet fully realized? When big players with lots of money start playing, they like to change the rules of the game to give themselves the advantage. That’s the free market, and it’s seldom a tidy process.

How will the other screen reader vendors respond? Will developers start utilizing this API in ways that make ChromeVox the necessary choice for their application? Is this just JAWS scripts now being implemented by Web developers? Does this fundamentally break the Web? Is this all just a tempest in a teapot?

I believe Google is in it to win it. They don’t see this as a research project or a neat idea. They believe they are advancing the state of Web accessibility. Do we agree with that?