Screen Readers at a Crossroads

I believe screen reading software stands at a crossroads right now. At Google I/O 2013, Google showed some of the possibilities of the ChromeVox API, and what they demonstrated represents a fundamental change in the way screen reader software interacts with Web browsers. In this post I will discuss why I see this as a fundamental shift, along with the risks and rewards I see in this model.

So what’s the big deal?

The first thing to look at is how screen reading software typically interacts with a Web page. Usually the software pulls data out of some model representing the Web page, interprets it, and presents it to the user. The data could be coming directly from the browser and the DOM or through the operating system’s accessibility layer. No matter where it gets that data, the screen reader almost always pulls the data then interprets the data itself based on the semantic markup on the page. The Web page does not usually push data to the screen reader software or tell the software how to interpret the data independent of the semantic markup. This means that when a screen reader user interacts with the page, every time they navigate somewhere or interact with an element, the screen reader is pulling information from the data source, interpreting it, and presenting it to the user.

This is why we tell people to build pages with good semantic structure and all of the other accessibility things we say. This way when a user encounters one of these elements the screen reader software can interpret what it is and present it to the user in a consistent way. So no matter what screen reader software you use, when something is coded as an <h1>, all screen reader software reports to their users that they are reading a heading level 1. Each screen reader application might speak this information differently or have slight variations for how you navigate the items, but there is always consistency within the screen reader application itself. This is good for both the screen reader user and the developer. The screen reader user can know that his heading navigation keys will always get him to a particular heading and to the next and previous headings. The developer doesn’t have to worry about how each screen reader will represent this <h1> to the user – they just know it will work. There is a standard which defines what <h1> means, and everyone agrees to follow that definition.

Now none of that has changed in ChromeVox. An <h1> is still reported as a heading level 1 to the user and the user can still navigate through the headings the same way. What has changed with the ChromeVox API is now the Web page has the ability to modify the way that an <h1> gets interpreted by the screen reading software. In fact, the ChromeVox API allows the Web page to reinterpret ANY semantic markup or even ANY action the screen reader user takes in whatever way the page sees fit. The fundamental shift is from the screen reading software pulling and interpreting the data to the Web application interpreting and pushing the data to the screen reading software.

An example

To see this in action you can either watch the following YouTube videos demonstrating this or you can read the demonstration page using two different screen reading programs, ChromeVox and any other screen reader.

With this example, please keep in mind that I am not an expert on the ChromeVox API. This example is what I cobbled together after watching a presentation at Google I/O and seeing some sample code on their slides. There is not a well documented public API to do all of this yet to my knowledge.

In this example there is a simple page with four headings, some text, and an image. If you use any screen reader software other than ChromeVox the page will behave just as you expect it to. The user can browse the page linearly or jump around from heading to heading.

Page read with JAWS and Internet Explorer

If you read this page with ChromeVox you will have a very different experience because I have used the ChromeVox API to change the way certain semantic elements are presented to the user, and I’ve even overridden the natural flow of the page so unexpected things happen when you browse the page. The two items I have changed are:

  1. When using the commands to go to the next and previous headings, instead of relying on ChromeVox to announce the heading text and then say “heading level 1”, I have told ChromeVox to say “You have jumped to the next heading which is called <insert heading text>.” I have redefined the built-in behavior of ChromeVox when it encounters headings when navigating to the next and previous headings.
  2. When browsing to the next and previous heading, when you try to go between the third and fourth headings, ChromeVox will tell you “You are not ready for the next heading yet. First you must spend time frolicking with the penguins. After that you may go to the next heading. Image. Penguins frolicking.” I have redefined ChromeVox navigation commands to do whatever I want, independent of the semantic structure of the page.

Page read with ChromeVox and Chrome
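Since there is no well-documented public API for this yet, here is only a rough sketch of what the first override above might look like. The `cvox.Api.speak` call and its queue-mode argument are assumptions based on the I/O demo, not a published interface.

```javascript
// Sketch only: cvox.Api.speak and its arguments are assumptions based
// on the Google I/O demo, not a documented public API.
function headingAnnouncement(headingText) {
  // Replaces ChromeVox's default "<text>, heading level 1" phrasing.
  return 'You have jumped to the next heading which is called ' +
    headingText + '.';
}

// Only attempt the override when the page is running inside ChromeVox.
if (typeof cvox !== 'undefined' && cvox.Api) {
  // Hypothetical call: speak the custom phrase, interrupting current speech.
  cvox.Api.speak(headingAnnouncement('Screen Readers at a Crossroads'), 0);
}
```

The point is that the custom phrasing comes from the page’s JavaScript, not from the screen reader’s built-in interpretation of the `<h1>`.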

It seems silly, but there are serious implications

Yes, that example is rather sophomoric, but it proves a point. Despite using <h1> elements, I was able to present those elements to the user in a very non-standard way. Also, despite using a navigation technique that is only supposed to allow me to jump from heading to heading, I was able to force the screen reader user to the image, even though they requested the next or previous heading. I am not doing any keyboard trapping to do this. It’s all done with the ChromeVox API so ChromeVox will behave differently than expected.

So why would they do this?

Does Google have evil intentions with this to trick users? I don’t think so. Google is actually doing some pretty cool things with this. For instance, this is how they are adding additional support for MathML. ChromeVox now has some native support for MathML, but it doesn’t fully implement it. What if you are trying to express math in a way that ChromeVox does not support yet? As a Web page developer, you have the ability to write some JavaScript to access the ChromeVox API that tells ChromeVox to interpret certain MathML symbols differently than it would natively.

If you aren’t so mathematically inclined there are other benefits too. If you do have a user interface that is tremendously complex and doesn’t lend itself to navigation by semantic markup, you could make the screen reader do and say whatever you want based on the user’s input. There’s now no reason to tie yourself to semantic navigation or even ARIA attributes for trying to convey richer UI elements. You can in essence write your own screen reader for each application you develop, and just use ChromeVox as the TTS engine to actually speak it.

Is this a bad thing?

Not always, but it definitely opens the door to abuse. Most Web pages and applications can be written accessibly using semantic markup with ARIA attributes, and ChromeVox can still handle those things just fine. In fact, I bet Google will still encourage you to use standards in your Web page. What this opens the door to is creating ChromeVox-only solutions for certain Web pages and applications.

This page best viewed with Internet Explorer 6…

Are we really ready to go back to this, or is Google, as they claim, advancing Web accessibility with features that have never been possible before?

On the positive side, this has the potential to let developers create Web pages and applications accessible to a level that has not been possible before. However, will ARIA not suffice to meet most if not all of our needs?

On the negative side, creating custom user interfaces for one particular group of users means, in essence, creating two sets of code. Will all of the new features in the non-screen reader UI be translated instantly over to the screen reader UI?

Well I heard that screen reader users like it when …

How many times have we heard misinformed developers start a justification for a particular implementation with these words? With great power comes great responsibility. I know Google does not intend for developers to use this API in obnoxious ways, but it’s out there now, and the reality is it will get misused some. Do we want to trust the same developers who just now figured out that “spacer graphic” is never appropriate alt text to be able to define Web page navigation in a way that is “more superior” than just using good heading structure?

So where do we go from here?

If ChromeVox had a bigger market share, this conversation would probably be a little different. ChromeVox does have one advantage over other screen readers though. It is by far the most accessible way to interact with Google Apps. Are we experiencing a market shift? Is Google trying to redefine the way screen reader software should work with Web pages? Is Google promoting its own ecosystem as the superior answer to their competitors? It worked for Apple, iTunes, and iOS devices. Are we at that early stage where the benefits of the ecosystem are not yet fully realized? When big players with lots of money start playing, they like to change the rules of the game to give themselves the advantage. That’s the free market, and it’s seldom a tidy process.

How will the other screen reader vendors respond? Will developers start utilizing this API in ways that make ChromeVox the necessary choice for their application? Is this just JAWS scripts now being implemented by Web developers? Does this fundamentally break the Web? Is this all just a tempest in a teapot?

I believe Google is in it to win it. They don’t see this as a research project or a neat idea. They believe they are advancing the state of Web accessibility. Do we agree with that?

Accessibility and Code Development – 10 Thoughts from a MeetUp

I recently attended a MeetUp of usability specialists involved in agile development who discussed how usability can be successfully incorporated into the agile process. I don’t pretend to be an expert in either user experience testing or in agile development, but I think many of the topics that were discussed can be translated into accessibility and code development in general. I’m going to take some of the salient points from the discussion and present them as some accessibility and development tips. I present these more as thoughts than definitive how-to guides.

1. Deal with the high level problems first and fill in the pieces later.

This tends to work better in an agile environment where you are iterating very often and know a new release is only a few weeks away. In terms of accessibility specifically, no matter what development style you are using, just because you can’t fix all of the problems right now doesn’t mean you shouldn’t fix some of the problems. (Wow, that was a triple negative. In other words, do something rather than nothing.)

2. Establishing standards and guidelines upfront is important.

Developers have standards for acceptable coding styles and practices, and if that style isn’t used, the code isn’t accepted. Extending this to the way UI elements are presented is a natural progression. Be sure to talk about how accessibility will be incorporated into the overall process and have people on board with it. Don’t shoehorn accessibility in as an afterthought.

3. Having developers witness people encountering accessibility problems is priceless.

Sometimes you will find that the developer has fixed the bug before you can even get back to your desk to discuss what went wrong.

4. Make sure the team understands it’s not just your idea.

These needs come from users’ experience and there is data to back it up.

5. Value and trust people.

Value the role everyone on the team plays and recognize that without their contribution, the whole process will fail. You work with professionals who are good at what they do.

6. Embrace the toxic person.

Instead of ignoring the person who is grumbling and sowing dissent, give them what they want – make them the center of attention. Invite that person to be involved in some usability testing with people with disabilities. They will either join the party or dig their own grave all on their own.

7. Having an accessible design is pointless if it can’t be implemented.

I don’t mean that there are a slew of Web apps out there that are too complex to even attempt to make accessible. What I mean is make sure there is communication between you and the developer to make sure you both understand what is being asked. Sometimes when we express a need from an accessibility perspective there are certain technical implications that will mean massive amounts of work that we as the non-developer might not be aware of. If there is reluctance on the developer’s part to implement a certain feature find out why. It might be a misunderstanding. It might be a part of a feature that is not necessary but would just be nice to have and it’s OK to leave out part of it.

8. Find that one developer you can recruit to your purposes.

If you have someone on “the other side” working for you, it’s much easier to get your message across.

9. Work in the language that developers work in.

Don’t write design specs. Write code.

10. Questions to think about while working with developers.

  • How do you get people to respect everyone else’s role in the process? As an accessibility specialist, you are already in the business of building bridges. How can you take that skill set to help less-than-functional teams learn to work more effectively?
  • How do you make it so you aren’t the reason that a delivery date was missed? Is all of your input received as something which gums up the process? How can you make your input flow more fluidly in their development process?

The Gamification of Accessibility, Round 1: Lessons Learned

NC State held its first ever Global Accessibility Awareness Day Challenge to see which campus Web sites could correct the largest number of accessibility errors. The challenge was based upon automated scans of Web sites which evaluated for both Section 508 and WCAG 2 conformance. The winners were the sites which corrected the highest percentage of errors. The challenge was just one part of the larger accessibility scanning system we have set up.

I could write up a lot about the system that I’ve created to do this, and I’ll have more details about the mechanics of the larger system later, but I wanted to share some lessons learned about one of the more novel aspects of this system – the game aspect. What happens when you introduce gaming aspects into accessibility?

For this project, the game aspects I introduced were the following.

  • the ability to see where you rank anonymously among all other sites on campus
  • guides for how to improve your site
  • the ability to quickly see how changes you make impact your accessibility ranking
  • awards and recognition, not just for the “winners” in each category, but for all sites which show significant improvement

1. Sometimes it’s all about the game…friendly competition is a motivator

In fact, some people will fix things without you even having to ask them. If I had approached a group on campus and said, “You have 28,000 accessibility errors spread across 8000 pages and I need you to correct them,” let’s face it, I would have gotten no traction. Perhaps they feel that if they’ve had 28,000 errors for this long, why do they need to fix them now? Perhaps they had other pressing matters they had to deal with. Who knows why.

Instead, I sent out an email that in essence said, “We are having a contest to see who can correct the greatest percentage of accessibility errors in their site over the next 2 weeks.” When they went to view their current standing they saw that they were ranked as the 371st most accessible site out of 385 and ranked in the bottom 10% in all categories for all sites on campus. At the end of two weeks the site in question had corrected about 27,500 errors, ranked as the 40th most accessible site out of 385, and was in the top 5% in all categories for all sites on campus. I didn’t have to ask this group to do anything – they just did it themselves. They ended up doing quite well in the contest. After the contest was over I saw one of the directors who oversees this group, who had only learned about the contest after it ended, and I told him that I was co-opting his employees for my own purposes.

2. But sometimes it’s all about the beer…it’s about building relationships

Yes, we play games because we like to win, but we also play games with people because we like to be with the other people. When I started the contest I knew who several of the participants would be because I’ve gotten to know them over the years and knew they would jump all over this. The great surprise though is who else shows up to play the game. It’s like a “pick-up” game of accessibility fixes. You bring a ball to the playground and you never know who is going to want to play.

Lots of people decided to come play. Currently 65% of all of the Web sites on campus have had someone claim them as their own and look at their accessibility report. What percentage of people normally would hit “delete” or “mark as spam” when they receive an email from “the accessibility guy” saying he has an accessibility report for them? (I’m not saying that would ever happen here at NC State – it’s just a hypothetical question.)

Not all of those Web site owners decided to play in the contest, but the fact that they came to the game at all, and the discussions I had with them, led to the next three lessons learned.

3. Have training and tutorials immediately available to users that solve the specific problems they are trying to fix

WCAG 2.0 is great, but let’s face it, not everyone was meant to have a deep meaningful relationship with the WCAG documentation. The documentation is extremely thorough, but that’s also what makes it so unapproachable for so many people. The documentation is War and Peace, but what people really want is the Cliff’s Notes version.

Most people aren’t like me. They don’t like to read about all of the nuances and possible implementations for each success criterion in WCAG or come up with creative implementations that haven’t been thought of yet. They have other jobs they want to do.

The moral of the story – don’t send people to the WCAG documentation to learn more about their errors or how to fix them. Tell them in your own words. You know what problems they are facing, so give them the solutions they need, not EVERY solution and nuance possible.

In our system, when users view a list of their accessibility errors, they also get a brief description of each error along with a link to a tutorial I wrote that gives them exactly what they need. Many of the examples I use come straight out of the WCAG documentation, but I’ve streamlined the process. As an example, when users see that they don’t have the language of the page defined, they get a link for Defining the Language of the Document (i.e., adding an attribute like lang="en" to the <html> element).

Many of these tutorials were written before the game began, but they were updated as new questions came in.

4. Different people come to the game with different skill levels, so you have to figure out how to meet people where they are

I would love to play basketball with Michael Jordan, but Michael Jordan might not like playing with me. How do you create a game space where people with varying skill levels can play?

When the game was first envisioned it was designed with Web developers in mind – people who were comfortable with viewing the source code of documents. I always knew I wanted to reach out to people beyond the traditional Web developer group, like our communications people and content creators. I thought one of my “ins” with these groups was I could also report to people how many broken links and misspelled words they had. While people might not make accessibility a top priority in their jobs, correcting misspelled words and broken links were a high priority for professionally produced content.

When I sent the communicators and content creators information about the game, I stated that they might want to forward much of this information to their Web developers. Then came the day when I taught a class on Interpreting your Accessibility Scan Results, and the only people that showed up were content creators. To them source code was a foreign language. So how do you teach about an accessibility report that talks in technical terms and gives line numbers of where errors occur? Or an even bigger problem, how do you speak to someone about an accessibility report who doesn’t even understand the concept of accessibility?

We spent the hour looking at their reports and starting to break out which errors were probably things their Web developers would have to fix and which were things that they as content creators would have to fix. Out of that discussion we were able to start generalizing probabilities of whether certain errors applied to them or to their developers. That information will soon make its way into everyone’s report, which leads us to the next lesson learned.

5. Don’t get too attached to your game design, because it probably could have been designed better and should have been designed better

Creating good games is hard and the best ones change when they need to. Did you know that in the original rules for basketball dribbling was not allowed?

[Image: the original basketball rules, typed on faded yellow paper]

You might not get all of your rules right the first time either. Be open to suggestions that people give you for improvements. Also be very responsive with changes, especially in those moments when people are giving accessibility their undivided attention.

Here are some of the suggestions people made after the game went live, all of which I implemented, usually within a day or two of receiving the request.

  1. ability to request rescans as often as you want them as opposed to once a quarter – this took a major overhaul of some of the back-end processing
  2. showing historical data for how a site has improved over time
  3. instead of showing line numbers where misspelled words are, show the actual misspelled words – this was not as easy as it seems given the tool we were using
  4. added distribution graphs to show where their site falls in terms of total errors

6. Automated scans miss some of the biggest accessibility problems

I knew this going in, but it was really obvious when I saw some of the results. One of our sites which did quite well in the contest still had significant accessibility problems with it that the scan didn’t pick up. There are certain aspects of accessibility that automated scans simply cannot accurately assess. For example, if a site uses a table-based layout, how do you determine if an actual data table embedded in the layout table is coded correctly? How do you even know if it’s a data table if there are no table headers, captions, or scope or summary attributes? Automated tests also fail at assessing user interaction with a Web page, like testing for true keyboard accessibility.

This automated scan doesn’t solve all of our problems, but it lets us take a step, which leads us to the next lesson.

7. Mario didn’t save the princess in world 1-1

[Image: Super Mario Brothers screenshot, with Mario standing in world 1-1]

I think one of the things that often holds us back in the accessibility field is we want to make sure we say and do everything as perfectly as possible and that we don’t leave anything out. I think being accurate and complete is very important; however, if completeness becomes the thing which prevents us from making any progress then we aren’t helping anyone.

I think this problem impacts both people who are trying to create accessible content and those of us trying to help them do it. We will always find more accessibility problems to fix. We will always find a more accessible way to implement something. As teachers we will always find better ways to say something and more complete examples to give. But you have to get it out the door. I believe in release early, iterate often.

In fact one of the aspects of games is the ability to slowly improve yourself over time. When you are able to slowly yet meaningfully make progress on improving your Web site’s accessibility, that can be very rewarding. Being able to say, “two months ago I was here, but now I’m here” is powerful.

In game design I think this approach is important too. Like I said, the tool as it is now doesn’t solve all of our problems, but we’ve made meaningful steps. I can now add new features into the game to help address some of the remaining deficiencies we have. Here are some of the planned new features.

  1. There will soon be a set of manual tests that you can do and earn extra points for your score once you complete and pass the test. For example, if you do a test to see if the keyboard focus is visible at all times you will get some bonus points. If you actually make the keyboard focus visible at all times you will get even more points.
  2. I am looking at adding additional evaluation tools into the mix that are better at detecting certain errors than our current toolset is able to do.
  3. And as a teaser, I’m also looking to add in some artificial intelligence and statistical techniques to assess for some problems that there are no good ways to test for yet. Stay tuned for this one.

8. If something good comes out of the game, share it with the world

The final lesson for this round – if you had fun and something good came of the game, share it with others. We had two projects come out of our challenge that are now available to everyone.

First, we now have a new Drupal module that automatically looks for links that are coded to open in new windows and appends some text to the link to alert the user to this. The appended text is hidden offscreen, where screen readers still announce it, and it comes into view when the link receives keyboard focus or a mouse hover.
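The technique can be sketched in plain JavaScript. This is an illustration of the approach, not the module’s actual code; the note wording and class name are assumptions.

```javascript
// Sketch of the technique only, not the Drupal module's actual code;
// the note wording and class name are illustrative assumptions.
function annotateNewWindowLinks(doc) {
  var links = doc.querySelectorAll('a[target="_blank"]');
  for (var i = 0; i < links.length; i++) {
    var note = doc.createElement('span');
    note.className = 'new-window-note';
    note.textContent = ' (opens in a new window)';
    links[i].appendChild(note);
  }
  return links.length; // number of links annotated
}

/* Accompanying CSS: the note sits offscreen, where screen readers still
   read it, and pops into view on keyboard focus or mouse hover:

   .new-window-note { position: absolute; left: -10000px; }
   a:focus .new-window-note,
   a:hover .new-window-note { position: static; }
*/
```

Positioning the text offscreen rather than using `display: none` is what keeps it available to screen readers at all times.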

Second, we now have a bookmarklet that lets you easily assess the reading level of published Web pages.
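For illustration, a reading-level bookmarklet along these lines might compute a Flesch-Kincaid grade from the page text. This is a hypothetical sketch with a rough syllable heuristic, not the actual NC State bookmarklet.

```javascript
// Hypothetical sketch, not the actual NC State bookmarklet: estimates
// reading level with the Flesch-Kincaid grade formula and a rough
// vowel-group syllable heuristic.
function countSyllables(word) {
  var groups = word.toLowerCase().replace(/[^a-z]/g, '').match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

function fleschKincaidGrade(text) {
  var sentences = text.split(/[.!?]+/).filter(function (s) {
    return s.trim().length > 0;
  }).length || 1;
  var words = text.split(/\s+/).filter(function (w) { return w.length > 0; });
  var wordCount = words.length || 1;
  var syllables = words.reduce(function (sum, w) {
    return sum + countSyllables(w);
  }, 0);
  return 0.39 * (wordCount / sentences) +
    11.8 * (syllables / wordCount) - 15.59;
}

// Packaged as a bookmarklet, it would run against the page's visible text:
// javascript:alert(fleschKincaidGrade(document.body.innerText).toFixed(1));
```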

I’ve written a lot and could write more, but it will have to wait for the next post.

How to Use Video.js – The Big Picture

This is a high-level overview of how to use Video.js to display your videos in HTML5 pages. This assumes you know some basics about video. Using Video.js is a three step process.

  1. Creating the video in the appropriate format
  2. Creating the captions
  3. Inserting the correct HTML code in the page

Creating the video in the appropriate format

One of the strengths of HTML5 is the ability to add video right in the Web page without any additional software, like QuickTime or Flash. However, due to licensing conflicts and turf wars, not all browsers support all video formats. Fortunately, with HTML5 video you can provide the video in multiple formats and the browser will choose the one it can play.

To get this to work there are a few things to keep in mind. At a minimum, you should provide your video as an MP4 file. This will work in all browsers except for Firefox. Unless you want your Firefox users to have to use the Video.js Flash fallback option, you should also include the video as a WebM file.

If you need help converting your video from one format into another, use the Miro Video Converter.

Creating the captions

To create captions the first thing you need to do is create a plain-text version of what is said in the video. If the video is short, like 5 minutes long, this is easy to do in a text editor. If it is longer, you might want to pay someone to create a transcript for you. The IT Accessibility Office can provide you with a list of transcription companies.

After creating the transcript you have to add in the time stamps to the file to tell when each piece of text is supposed to display on the screen. The IT Accessibility Office provides a free service to add time stamps. If you are interested, please contact the IT Accessibility Office.

Inserting the correct HTML code in the page

After you have your video file(s) and caption file, upload them to the server along with an HTML5 file with the following code embedded in it.

In the <head> section:

<link href="" rel="stylesheet">
<script src=""></script>

Somewhere in the <body> section:

<video id="my_vid_id" class="video-js vjs-default-skin" controls preload="auto" width="640" height="264" poster="poster.png" data-setup='{}'>
   <source src="movie.mp4" type='video/mp4' />
   <source src="movie.webm" type='video/webm' />
   <track kind="captions" src="captions.vtt" srclang="en" label="English" />
</video>

When this page is loaded in a Web browser, Video.js will

  1. check to see which of the video formats the browser natively supports
  2. load the Flash fallback video player to handle any browser/video format compatibility problems
  3. replace the browser’s default video controls with its video controls

A New Look-And-Feel

I’ve updated the blog with a new template. It’s one that we use on campus on several sites. I like this one better than my old one, and this way any accessibility work I do on it will also be transferred to several other sites around campus.

Accessible Video.js Player Available on Global Accessibility Awareness Day

Video.js is an HTML5 video player that makes embedding video in HTML5 pages very easy and gives you a consistent look-and-feel across browsers. Video.js will use the browser’s built-in ability to play video in the format the browser prefers, but it uses a standard set of user interface (UI) controls that work across browsers. It will also work on mobile devices.

One of the strengths of Video.js is how it solves one of the big problems with HTML5 video – browser support of codecs. If you’ve ever worked with HTML5 video you know you usually have to provide your video in at least two formats to ensure that it will work in all browsers – MP4 and WebM. With Video.js, you can still provide the video in multiple formats and it will use the browser’s built-in ability to play that video in the preferred format; however, if a browser does not support one of the formats you have provided, Video.js will use a Flash fall-back player to play the video while still using its standard set of UI controls.

One of the other great strengths of Video.js is that the player controls are accessible. Video.js is an open source project and I’ve been working with the community to make the player more accessible. With this new version of Video.js released today, which is coincidentally Global Accessibility Awareness Day, the player is now accessible to keyboard-only users, screen reader users, and voice-interface users. The specific accessibility improvements are:

  • The UI controls are keyboard accessible, including seeing the visual focus on the controls.
  • The UI controls are named properly and the status of each is updated by the appropriate ARIA attributes, so screen reader users can fully interact with the player. It works with JAWS, NVDA, and VoiceOver. The accessible controls include the following:
    • Play/Pause Button
    • Progress Bar
    • Time Elapsed
    • Time Remaining
    • Fullscreen Toggle
    • Volume Slider
    • Mute Button
    • Caption Button
  • The tab order for the UI elements has been altered to make the flow more logical.
  • The font size of the caption track is now enlarged when viewing the video in full screen mode.

Making Video.js accessible is a work in progress and there are still a few things that need to be done.

  1. The tab order probably needs to be adjusted some more to make it more logical. To alter it any more than what it is right now will take some significant modifications to the CSS and markup.
  2. Full keyboard accessibility needs to be more robust. Because the player controls show and hide themselves based on whether the mouse is hovering over the player, this causes problems for keyboard users. The solution isn’t as simple as just adding in the appropriate onfocus and onblur events, because the video player is made up of multiple UI elements, and bubbling up the focus through multiple elements takes some extra testing to make sure it works in all browsers. However, currently there is a little of what I call “accessibility by accident.” If you start playing the video by using the keyboard, the mouse never enters or exits the video area, thus never triggering the show/hide event. That makes the player controls stay visible all the time. The downside is that the controls cover the bottom portion of the video, albeit in a semi-transparent manner. There are, however, a couple of edge cases where the controls do end up hiding themselves. Coming up with a more robust solution will solve this problem. Playing the video in full screen mode seems to also eliminate the problematic edge cases.
  3. Currently the caption button acts as a menu with a submenu. Getting it to truly behave the way a submenu should will require some more significant modifications to the code.
  4. The time progress bar currently increments and decrements by 5 seconds when using the left and right arrow keys. This is better than before, when it changed by only 1 second. I’m not sure what the ideal amount is and am open to suggestions.
  5. Support for technologies like Dragon NaturallySpeaking needs to be more robust. You can currently navigate the entire UI with Dragon, but it could be made more elegant.
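
One way to reason about the show/hide problem in item 2 is to separate the visibility decision from the DOM events that drive it. The sketch below is an assumed approach, not the player's actual code: a pure function decides visibility from hover, keyboard-focus, and playback state, and focusin/focusout (which, unlike focus/blur, bubble up through the player's multiple UI elements) would drive the focus flag:

```javascript
// Sketch (assumed approach, not Video.js's actual code): decide whether
// the control bar should be visible from pointer, keyboard-focus, and
// playback state.
function controlsVisible(state) {
  // Keep the controls visible when the mouse is over the player, when
  // any control inside the player has keyboard focus, or when playback
  // is paused, so a keyboard user is never stranded without controls.
  return state.hover || state.focusWithin || state.paused;
}

// The DOM wiring would look roughly like this. focusin and focusout
// bubble, so a single listener on the player container covers every
// control inside it:
//
// playerEl.addEventListener('focusin',    () => update({ focusWithin: true }));
// playerEl.addEventListener('focusout',   () => update({ focusWithin: false }));
// playerEl.addEventListener('mouseenter', () => update({ hover: true }));
// playerEl.addEventListener('mouseleave', () => update({ hover: false }));
```

With the decision isolated like this, the edge cases mentioned above become testable conditions rather than timing-dependent browser behavior.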

If users, particularly keyboard-only users or screen reader users, have difficulty interacting with the player, viewing it in full screen mode might correct some of the problems.

There was no long-term plan to release this today on Global Accessibility Awareness Day – it was just serendipitous.

2013 Global Accessibility Awareness Day Challenge Results

Congratulations to all of the Web site owners who participated in NC State’s First Annual Global Accessibility Awareness Day Challenge! As a group we corrected 194,232 accessibility errors across many NC State Web sites over the past two weeks. Here is a summary of some of what we fixed.

  • 96,747 link-related problems
  • 23,702 alternative text instances
  • 15,546 heading structure problems
  • 14,586 code validation errors
  • 9,963 keyboard access problems
  • 5,930 form element problems
  • 4,749 table problems
  • 1,792 language definition problems

For the full listing with the final standings, visit the NC State Global Accessibility Awareness Day Website Challenge page.
In the next blog post, I’ll discuss some lessons learned from this contest.

NC State Global Accessibility Awareness Day Website Challenge

Are you ready to put your website up to the challenge? Over the next two weeks, ending on May 9, NC State will celebrate Global Accessibility Awareness Day in part by holding NC State’s First Annual Global Accessibility Awareness Day Website Challenge.

The Challenge is to see how many accessibility errors you can correct in your Web site by May 9. The results are based on the percentage of errors you correct according to the Accessibility Scan being done by the IT Accessibility office. Your starting point will be the first scan completed on your site; the final point will be the most recent scan you submit before May 9. To participate, here is what you need to do.

  1. See if your site has already been scanned. If it has, request access to the data on that same page. If it has not, submit your site for its initial scan there as well.
  2. Between now and May 9, correct as many of the accessibility problems as you can. Only the “accessibility” errors in the report will count towards this contest.
  3. Before 11:59 PM on May 8, be sure to submit a rescan request for your site.
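
The percentage-based scoring described above boils down to simple arithmetic; here is a minimal sketch (the function name is mine, not the official scoring code):

```javascript
// Sketch: percentage of accessibility errors corrected between the
// initial scan and the final rescan before May 9.
function percentCorrected(initialErrors, finalErrors) {
  if (initialErrors === 0) return 0; // nothing to correct
  return ((initialErrors - finalErrors) / initialErrors) * 100;
}

// For example, a site that starts with 200 errors and ends with 50
// has corrected 75% of them.
```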

On May 9, fabulous accolades and digital blingage will be bestowed upon the sites in each size category that have shown the most improvement. The three size categories are 1-99 pages, 100-999 pages, and more than 1,000 pages. There will be additional recognition for sites that have corrected at least 25%, 50%, and 75% of their accessibility errors.

If you would like some additional help with your scan, you can

  1. Come to the Interpreting Your Accessibility Scan Report Lunch and Learn on Monday, April 29 from 12:00-1:00 in ITTC Lab 1A in the D.H. Hill Library.
  2. Come to the Co-Working Day on May 3 from 9:00 AM – 5:00 PM in The Avent Ferry Technology Center, room 106, to ask Greg Kraus, the University IT Accessibility Coordinator, any questions you would like and get expert advice.
  3. Consult the Accessibility Handbook for techniques to create accessible Web pages.

Global Accessibility Awareness Day, which is celebrated on May 9, is a community-driven effort whose goal is to dedicate one day to raising the profile of and introducing the topic of digital (web, software, mobile app/device etc.) accessibility and people with different disabilities to the broadest audience possible.

Testing Web Page Readability

I’ve developed a readability bookmarklet that lets you easily analyze a Web page, or a portion of one, to see how readable the text is. This tool is available to anyone in the world, not just NC State University. To get it, first visit the readability bookmarklet page. After adding the bookmarklet to your browser, visit a Web page, select some text, and run the bookmarklet. If you want to analyze the whole page, do not select any text before activating the bookmarklet.
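
The select-some-text-or-analyze-the-whole-page behavior can be sketched like this (the helper name is mine and the actual bookmarklet's code may differ; in a browser the selection would come from window.getSelection() and the fallback from document.body.innerText):

```javascript
// Sketch (assumed helper, not the bookmarklet's actual code): analyze
// the user's selected text if there is any, otherwise fall back to
// analyzing the whole page.
function textToAnalyze(selectionText, fullPageText) {
  const selected = (selectionText || '').trim();
  return selected.length > 0 ? selected : fullPageText;
}

// In a browser this would be invoked as:
// textToAnalyze(window.getSelection().toString(), document.body.innerText);
```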

The results are based on several tests that examine the number of words, sentences, and syllables to estimate how easily the text can be read. Most of these tests estimate the grade level required to comprehend the text. The recommended levels are based on WCAG 2 Level AAA conformance for making content readable and understandable, which says that text that cannot be understood with a 9th grade education (ages 15-16) must also be presented in an alternative way.
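
As one example of the kind of test involved (the bookmarklet may use a different mix of formulas), the widely used Flesch-Kincaid grade level is computed from exactly the counts mentioned above:

```javascript
// Flesch-Kincaid grade level: estimates the US school grade needed to
// read a passage, using word, sentence, and syllable counts.
function fleschKincaidGrade(words, sentences, syllables) {
  return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59;
}

// For example, 100 words in 5 sentences with 150 syllables:
// 0.39 * 20 + 11.8 * 1.5 - 15.59 = a grade level of about 9.9
```

You can see from the formula why short menu items skew results: each one counts as a very short "sentence," dragging the words-per-sentence ratio, and therefore the grade estimate, down.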

One of the features of this tool is the ability to analyze only part of a Web page instead of an entire Web page. A lot of the text on Web pages exists as menu items or other short blurbs. If you analyze a page in its entirety, these bits of text can skew the results of a readability test because they will change the ratios of things like words per sentence.

Download Google Doc as Microsoft Office Document Tool Updated

The Download Google Doc as Microsoft Office Document Tool has been updated to handle a couple of problems.

  • The script is now delivered via SSL. Previously the script was served from a non-SSL server. Google Docs are delivered as SSL pages and some browsers do not allow insecure content (this script) to interact with secure content.
  • The script now handles spreadsheet conversion better. Sometimes the URLs for the spreadsheets have different formats depending on how they are shared and how you get to them. The script now accounts for these differences.

Because of the change to SSL, if you previously added this bookmarklet to your browser, you will need to delete the original one and add the new version of the Download Google Doc as Microsoft Office Document tool.

Previous announcement about the Google Doc Download Tool.