I get asked all the time how you can adequately test for accessibility in an agile development process – when you don’t always have access to expert assistive tech users. Let’s be clear at the start: the best way to test your applications for accessibility is to make sure that at many points in your development process you ARE testing with native assistive technology users – especially screen readers. However, not all native screen reader users can be considered experts, either.
At Blackboard, I developed a three-step accessibility testing process that our development teams run during each two-week sprint:
- Testing for technical accessibility bugs
- Testing for keyboard usability
- Testing for screen reader usability
Many development organizations are still leaving accessibility testing to the very end of the development cycle, right before releasing a complete feature or tool. This is too late. If you can test for accessibility along the way, like you test for all other elements, you can save a lot of time, money, and frustration trying to remediate accessibility issues after you’ve gone to market. Let’s talk about each of these steps in more detail now.
Step One: Technical accessibility testing
Getting the technical aspects of accessibility correct is actually the simplest part of the process. There are many tools out there that can help you; some are free and some are not. The go-to free tools that my teams use are WAVE (from WebAIM) and AXE (from Deque Systems). As you start your in-sprint testing, run these tools on every page within the feature you’re building. They will catch most technical issues in your code. My recommendation is to use WAVE for static pages, and AXE for anything more complicated than static content. Deque has also provided a set of open APIs for AXE that can be incorporated directly into your unit testing framework and run automatically alongside any other automated testing you’re executing.
Before you move on to step two, fix ALL bugs found by your technical or automated accessibility testing. If technical accessibility is lacking, you’ll have a much harder time assessing the usability of your system in the subsequent steps.
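To make that gating concrete, here is a minimal sketch of an assertion helper you might wire into a unit-test suite. It consumes an object shaped like axe-core’s documented results (a `violations` array, each entry with an `id`, `impact`, and affected `nodes`); the call to `axe.run(document)` itself is omitted because it needs a browser or jsdom environment, and the helper and sample results object are illustrative, not part of the AXE API.

```javascript
// Sketch of a test-suite assertion over axe-core style results.
// In a real suite, `results` would come from `await axe.run(document)`
// with axe-core installed and a DOM available (e.g. via jsdom).

function assertNoViolations(results) {
  // axe-core reports failures in `results.violations`
  if (results.violations.length === 0) return;
  const summary = results.violations
    .map(v => `${v.id} (${v.impact}): ${v.nodes.length} node(s)`)
    .join('\n');
  throw new Error('Accessibility violations found:\n' + summary);
}

// Hand-made object shaped like axe-core output, for illustration only:
const fakeResults = {
  violations: [
    { id: 'image-alt', impact: 'critical', nodes: [{}, {}] },
  ],
};

try {
  assertNoViolations(fakeResults);
} catch (e) {
  console.log(e.message);
}
```

Failing the test run on any violation is what turns the tooling from a report into a gate: the sprint’s code can’t merge until the technical issues are fixed.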
Step Two: Keyboard Usability
Keyboard usability is the best place to start usability testing for accessibility. The keyboard is only one of many input devices used by people with disabilities, but it’s the most common, and it’s often used by power users who are simply looking for more efficiency in navigating your application. If you spend time making sure that your system works well for keyboard users, and you’ve fixed all the technical issues you found in step one, you’re well on your way to ensuring your system works with other input devices like foot pedals, eye trackers, voice recognition software, and many others.
Keyboard and alternative input device users depend on shortcuts to navigate. Many interactions rely on common patterns, but occasionally you need to build your own custom shortcuts for your application. When you do, investigate the existing shortcuts in common browsers and assistive technology tools so that you’re not adding shortcuts that conflict with them.
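That audit can be as simple as keeping a reserved list and checking proposed shortcuts against it. A toy sketch follows; the reserved set here is a small made-up sample, not an authoritative inventory, and a real audit would cover every supported browser, OS, and assistive technology.

```javascript
// Toy conflict check for proposed custom shortcuts. The reserved set
// below is illustrative only. Note that single letters like H or B
// conflict specifically with screen reader browse-mode quick keys.
const RESERVED = new Set([
  'Ctrl+T', 'Ctrl+W', 'Ctrl+L', 'Ctrl+F', // common browser shortcuts
  'H', 'B', 'T', 'K',                     // screen reader browse-mode quick keys
]);

function findShortcutConflicts(proposed) {
  return proposed.filter(shortcut => RESERVED.has(shortcut));
}

console.log(findShortcutConflicts(['Ctrl+Shift+S', 'H', 'Ctrl+T']));
// → [ 'H', 'Ctrl+T' ]
```

Running a check like this in code review keeps a new shortcut from silently shadowing a navigation command your users depend on.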
Here are some quick tips for getting the most out of your keyboard testing:
- Don’t cheat. Unplug your mouse and put it out of reach.
- Print out cheat sheets for some of the more common application shortcuts, like those for Firefox, Chrome, and Safari.
- Use the browser find functions to jump to different sections of the page.
- Check the tab order: make sure focus is always trapped in the active layer of the application (such as an open dialog) and is never sent to non-interactive elements.
- Access ALL of the controls on the page and make sure you can activate or interact with every one of them using only your keyboard.
- Try ALL custom keyboard shortcuts you developed in your application. Try them across multiple browsers.
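The focus-trapping tip above boils down to wrap-around logic: Tab from the last focusable element in a dialog should land on the first, and Shift+Tab from the first should land on the last. Here is a minimal sketch of that index arithmetic; the DOM wiring (collecting the dialog’s focusable elements and calling `.focus()`) is omitted, and the function name is my own.

```javascript
// Wrap-around index logic for a modal focus trap. `current` is the
// index of the focused element among the dialog's focusable elements,
// `count` is how many there are, and `shiftKey` is whether Shift+Tab
// was pressed. Returns the index focus should move to.
function nextFocusIndex(current, count, shiftKey) {
  if (count === 0) return -1; // nothing focusable: nowhere to go
  return shiftKey
    ? (current - 1 + count) % count // Shift+Tab wraps first -> last
    : (current + 1) % count;        // Tab wraps last -> first
}

console.log(nextFocusIndex(2, 3, false)); // → 0 (Tab from last wraps to first)
console.log(nextFocusIndex(0, 3, true));  // → 2 (Shift+Tab from first wraps to last)
```

If tabbing in your dialog ever escapes to the page underneath, this is the invariant that’s being violated.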
Before you move on to step three, fix ALL bugs you found in your keyboard testing. If your keyboard interaction is lacking, or not working consistently, you will have a significantly harder time testing with a screen reader.
Step Three: Screen reader testing
JAWS (built by Freedom Scientific) is still the most common screen reader in use, according to the results of the most recent WebAIM screen reader survey, but usage of VoiceOver (the screen reader built into OS X and iOS) and NVDA is definitely on the rise. Approximately 58% of screen reader users self-identify as advanced users of the tool. Most are self-taught, having figured the tool out by trial and error.
The key thing for internal testers to understand is that there are no “typical” screen reader users. Some navigate using the “headings” and “links” list tools built into their screen reader. Many rely on quick keys like B for finding buttons, H for headings, T for tables, and so on. Most use a combination of techniques depending on the task at hand. No screen reader user simply tabs around the page as their first approach to finding content or information.
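Because so much of that navigation hangs off headings, one quick structural check is whether the page’s heading outline skips levels (say, an h2 followed directly by an h4), which makes the headings list disorienting. A small sketch of that check, assuming you’ve already extracted the heading levels in document order:

```javascript
// Flag skipped heading levels in a page outline. `levels` is the
// list of heading levels (1 for h1, 2 for h2, ...) in document order.
function headingSkips(levels) {
  const skips = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] - levels[i - 1] > 1) {
      // e.g. an h2 followed directly by an h4
      skips.push({ index: i, from: levels[i - 1], to: levels[i] });
    }
  }
  return skips;
}

console.log(headingSkips([1, 2, 2, 4])); // → [ { index: 3, from: 2, to: 4 } ]
console.log(headingSkips([1, 2, 3, 2])); // → [] (moving back up a level is fine)
```

Tools like WAVE flag this too, but it’s cheap enough to verify while you’re already listening to the headings list.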
Successfully testing the usability of your application depends on understanding both how these users interact with their tools and how the tools themselves work. Here are a few tips for successfully testing with a screen reader.
- Don’t just tab around the page, navigate how a non-visual user would.
- Use the landmarks, headings, links, and buttons quick links or dialogs to move around the page. Use reading commands to read the text near headings and links.
- Listen for audio cues as you move between reading and input modes. Ideally the mode only changes when you’ve encountered a form, though it can occasionally happen when accessing custom controls. If you hear this happening in many places in the application, you may need to rethink how the control has been built.
- Check pages in both desktop and mobile screen readers as users will use both.
- Challenge yourself and either turn off your monitor or increase the speed of speech as you become more familiar with the tools.
Before you can call your testing complete, you need to fix all the issues you found in your screen reader testing. Doing so will ensure that the product you go to market with has a high level of accessibility quality. It will also mean you’ll find fewer issues in the system after it’s been shipped.
This testing process should take anywhere from 10 to 30 minutes if you’re doing it every sprint, because the elements you are testing are the small, digestible chunks targeted for that sprint. If the development of a feature spans multiple sprints, you can do these smaller in-sprint tests, but be sure to follow them up with a comprehensive end-to-end test of the full feature before you ship it.