
Top 10 mistakes new AX developers make

Posted in Accessibility

In today’s world, more and more attention is being placed on creating technology that provides equivalent and equitable experiences for all types of users. But it’s not as easy to do as it sounds. After working as a designer in software development for over 10 years, I believe that developers are the foundation for how things truly work. They execute a vision handed off by us designers, and they are the key to bringing it to life. To do this for diverse users, an understanding of coding best practices related to accessibility is key. But accessible development is still a specialized skill set. As with learning anything new, there are several easily misunderstood, or misapplied, concepts that can trip up any developer getting started with accessible coding. This post is intended to help you avoid some of the most common gotchas.

But before I talk about coding best practices I want to start by helping you understand a little more about how people with disabilities interact with technology. Many rely on a variety of assistive tools to interact with digital environments. But there are really two major classifications of assistive technology that developers need to understand: input devices and screen readers.

" "There are a variety of input devices that could be used by people with limited mobility. They range from keyboard and voice recognition tools, to foot pedals and mouth sticks. All of them are intended to assist a user in moving around the interface and interacting with the elements of your site. But coding for all of them is the same.

""There are two distinct types of screen readers. Those used for navigation and interaction (as well as reading text) and those used solely for reading and comprehension. Developers need to understand the differences between them but most of the heavy lifting in development will fall under the support for screen readers that help someone navigate.

Screen reader tools such as TextHelp, ReadSpeaker, or Kurzweil 3000 support reading and comprehension, and are generally used by sighted individuals with cognitive disabilities who need the content read aloud to them. These tools help them internalize and understand the information through auditory means. There are a number of additional features within most of these tools, but for our purposes today, there isn’t much else you need to know.

Screen readers that assist with both reading and navigation are generally used by individuals with significant vision loss or total blindness. These tools (like JAWS, VoiceOver, and NVDA) allow non-visual users to understand the structure and content within your application as well as interact with the elements of the interface. Many of these tools have features built in that help individuals interact with technology. For example, they can quickly convey the structure of a page, present a list of links or buttons, move quickly between headings, and much more. The availability of these features within the screen reader means non-visual people interact with technology in very different ways than visual people do.

Now we’ve come to the meat of this post: what all of this means for developers and what they need to do to build full accessibility support into anything they create. There is a lot of information conveyed below, but here is the list of the top 10 mistakes, misunderstandings, and misconceptions I’ve encountered.

  1. You can skip heading levels or use custom HTML elements as they have no impact on accessibility.
  2. Landmarks don’t really provide any value. What are they anyway?
  3. Buttons and links are interchangeable.
  4. Focus is controlled by the browser. There is nothing extra that needs to be done.
  5. To control focus or make a screen reader read static content you need to add a tab index.
  6. Menus and site navigation are the same thing.
  7. Tabs are one of the easiest ways to organize information for all users.
  8. Dynamic page changes are announced to screen readers automatically.
  9. There is no danger in building custom controls as long as you add keyboard support.
  10. Support for accessibility is the responsibility of the developers implementing the design.

The rest of this post will provide more details about each area developers need to consider to build truly accessible websites and applications.

1. Semantic Structure

One of the most basic misconceptions is that you can build custom HTML elements or skip over semantic structures like heading levels without impacting accessibility. Sorry to say, getting this wrong can have a pretty big impact on people’s experience with your site.

But this is really about good quality code as much as accessibility. Good semantics always matter. HTML has a number of elements that come with expected behaviours, styles, and interactions: layout containers, tables, links, buttons, etc. But what creates the most challenges for non-visual users is a lack of headings.

Non-visual users rely on headings to help them understand the nested structure of content in a page. They rely on headings being logically and consistently used to help them gain context and awareness throughout the application. When headings aren’t present, there is no way for a non-visual user to get an overview of your content and it becomes the dreaded wall of text. When heading levels are skipped or used erratically, non-visual users can have a hard time understanding the flow of your content.

Best practice: Make good use of headings to convey the structure of CONTENT, and use CSS to create the visual styles that you want.

Avoid doing: Don’t skip heading levels. But if you do, do it consistently.

Avoid doing: Avoid using headings to convey visual layout.
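
Here’s a minimal sketch of that separation (the class name and page content are illustrative): the heading levels describe the content outline, while CSS handles the visuals.

    <style>
      /* Visual size comes from CSS, not from the heading level */
      h2.quiet { font-size: 1rem; text-transform: uppercase; }
    </style>

    <h1>Annual Report</h1>
    <h2>Financial Summary</h2>
    <h3>Revenue</h3>
    <h3>Expenses</h3>
    <h2 class="quiet">Appendix</h2> <!-- styled small, but still a real h2 -->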

2. Landmarks

I often get asked by developers, “What the heck are landmarks and why are they useful? I’ve never seen anything in code called a ‘landmark’.” Well, they’re right about that part. Landmarks are really just specialized ARIA roles.

Side note: ARIA stands for Accessible Rich Internet Applications and it’s a set of roles, attributes, and states applied to HTML objects to help assistive technology users interact with your application more easily.

Landmarks are an ARIA construct that allows developers to create semantically valid regions on a page. These regions help support screen reader navigation and provide structure within a page or application. They are complementary to headings: landmarks describe the layout and types of information within the page, while headings provide structure to the content.

There are 8 roles in ARIA that are designated as landmarks.

  • Banner
  • Main
  • Navigation
  • Contentinfo
  • Complementary
  • Form
  • Search
  • Region

There are also 6 default elements in HTML 5 that automatically map to these landmarks which can save a developer a bit of coding time if using HTML 5:

  • Header (maps to banner)
  • Main
  • Nav
  • Footer (maps to contentinfo)
  • Aside (maps to complementary)
  • Form

Landmarks can be applied to any generic element or custom control to expose accessibility information, but the most common misconception is that you can create your own landmarks.

Best practice: Stick to the standard roles to ensure screen reader users can properly interact with your application.

Use with caution: If you need to create something not provided in the list above, use role="region". Add a unique label so the screen reader can differentiate them.
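
Here’s a minimal sketch of a page skeleton using both the HTML 5 elements and labelled regions (the labels and placeholder content are illustrative):

    <header>Site banner</header>
    <nav aria-label="Primary">Site navigation links</nav>
    <main>
      <section role="region" aria-label="Search filters">Filter controls</section>
      <section role="region" aria-label="Search results">Result list</section>
    </main>
    <aside>Related content</aside>
    <footer>Copyright and contact details</footer>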

3. Buttons and Links

The last of the semantic elements in our top 10 list is the concept of buttons and links. Strangely, there is a lot of confusion among developers about when something should be a link and when it should be a button.

Here’s the simplest breakdown:

  • A button is used when you’re executing some action on a page and NOT changing the context.
  • A link is used when you’re changing context – like opening another page or layer in the application.

There are a couple of reasons this matters.

Non-visual users have developed mental models about what each of these elements does. Their screen readers will read out WHAT it is (a link or a button), and that allows them to make an assumption about what will happen. When a button navigates them to a new page, it can be jarring and confusing.

Sighted users not using a mouse SEE something that looks like a button or a link and they have a preconceived understanding of how to activate it.

  • Pressing SPACE or saying “Click BUTTON” to activate something that looks like a button.
  • Pressing ENTER or saying “Click LINK” to activate something that looks like a link.

When these don’t work as expected, you’re increasing the amount of effort it takes people to use your application.

Best practice: If the element takes the user to another context, it’s a link. If it executes an action on a page, like saving data on a form, it’s a button.

Use with caution: If you really need to break this rule to meet design objectives, do it cautiously and be consistent across your application.
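
In code, the distinction looks like this (the handler name saveForm is hypothetical):

    <!-- Executes an action in the current context: a button -->
    <button type="button" onclick="saveForm()">Save changes</button>

    <!-- Changes context by navigating somewhere: a link -->
    <a href="/reports/2023">View the 2023 report</a>

    <!-- Anti-pattern: announced as a link, but behaves like a button -->
    <a href="#" onclick="saveForm()">Save changes</a>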

4. Focus Management

Let’s move now into focus management. It’s critical for both non-visual users and users who cannot use a mouse that focus is never dropped or unexpectedly moved. Sometimes focus is handled acceptably by default when pages refresh, but more often you need to control it programmatically.

Controlling focus is the key to helping users understand where they are at all times. The goal is to always put the user in an appropriate place when the screen changes.

For keyboard users, it’s best to send them to the first interactive element in the new context when you’re moving them between pages or layers. But screen reader users may need more context, as the first interactive element may come AFTER the page heading or descriptive information they need to understand where they are.

A general best practice is to send focus to the first heading or div in the new context to ensure non-visual users know where they are. But you also need to make sure this element does not end up in the tab order and create clutter for keyboard only users. We’ll talk more about how to do that in the next section.

One final, but critical element of controlling focus is returning the user to the right place in the previous context when they’ve closed a layer or moved backwards in the navigation. This is especially important when you’re working with modal dialogs or layers in a single page application. You also need to ensure you’re trapping focus in the active layer for the duration it’s open. One of the most frustrating things for these users is to lose their place by having focus drop back into the background layer.

Best practice: Controlling focus for screen reader users and keyboard-only users can be a bit different. That’s ok.

Best practice: You must trap focus in the active layer and return the user to the same element they came from.

Use with caution: Avoid sending focus to a non-interactive element. If you do, make sure it’s excluded from the tab order.
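
As a rough sketch of these rules (the ids and dialog markup are assumptions for illustration), a modal dialog might manage focus like this:

    <button id="open-settings">Settings</button>

    <div id="dialog" role="dialog" aria-modal="true" aria-labelledby="dialog-title" hidden>
      <h2 id="dialog-title" tabindex="-1">Settings</h2>
      <button id="close-settings">Close</button>
    </div>

    <script>
      let opener; // the element that had focus before the dialog opened
      const dialog = document.getElementById('dialog');

      document.getElementById('open-settings').addEventListener('click', () => {
        opener = document.activeElement;
        dialog.hidden = false;
        // Send focus to the heading so screen reader users hear the context;
        // tabindex="-1" keeps it out of the tab order (see the next section).
        document.getElementById('dialog-title').focus();
      });

      document.getElementById('close-settings').addEventListener('click', () => {
        dialog.hidden = true;
        opener.focus(); // return the user to where they came from
      });

      // Rudimentary focus trap: keep TAB cycling inside the open dialog
      dialog.addEventListener('keydown', (e) => {
        if (e.key !== 'Tab') return;
        const items = dialog.querySelectorAll('button, a[href], input, select, textarea');
        const first = items[0], last = items[items.length - 1];
        if (e.shiftKey && document.activeElement === first) {
          last.focus(); e.preventDefault();
        } else if (!e.shiftKey && document.activeElement === last) {
          first.focus(); e.preventDefault();
        }
      });
    </script>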

5. Tab Index

Ok. We just talked about controlling focus. One of the most common ways to set a focus target is with the tabindex attribute. But it’s also something to be careful with. Standard interactive elements (like links, buttons, and menus) are automatically added to the tab order, so you don’t need to add a tabindex attribute. But tabindex can be added to any element. One of the most common misunderstandings is that you need to add tabindex="0" to non-interactive elements in order for screen reader users to be able to read the text. That’s not true.

Screen reader users have other ways of finding and reading text that’s in proximity to other elements they can navigate to (like headings, tables, links, buttons, etc). All you gain by adding tabindex="0" to non-interactive elements is extra tab stops for sighted keyboard users, which quickly becomes annoying.

If you’re programmatically sending the user to a non-interactive element (to control focus and ensure an awareness of context), add tabindex="-1" rather than tabindex="0". This will allow focus to be sent there, providing context to screen reader users, but it will keep it OUT of the tab order and reduce the clutter for sighted keyboard users.

Best practice: Use tabindex="-1" if you need to send focus to a non-interactive element.

Avoid doing: Don’t clutter the keyboard interaction by adding tabindex="0" to static elements in the page.
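
A short sketch of the difference (the id is illustrative):

    <!-- tabindex="-1": reachable by script, but NOT in the tab order -->
    <h1 id="page-title" tabindex="-1">Search results</h1>

    <script>
      // After a dynamic page change, send focus to the new heading so
      // non-visual users know where they landed.
      document.getElementById('page-title').focus();
    </script>

    <!-- Anti-pattern: tabindex="0" turns static text into a useless tab stop -->
    <p tabindex="0">Static descriptive text</p>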

6. Menus

The biggest gotcha with menus is using them to build primary site or application navigation elements. These are technically not menus.

Menus are widgets that offer a list of choices to a user, such as a set of actions or functions. They contain “menu items” and can be contained in “menu bars” to create more robust constructs. Menu bars are generally persistent visible elements that are presented horizontally and intended to mimic the action menus in many desktop applications.

The mental models for interacting with menus have been around for a long time and are pretty well understood. Once the menu has focus, use the arrow keys to move around it. UP and DOWN move within the menu items. In a menu bar, LEFT and RIGHT cycle you to the next parent menu, or into a sub-menu of the item in focus where appropriate. ESCAPE will close or exit the menu. TAB will move you to the next interactive element on the page, after the menu.

Navigation constructs are not technically menus. They are more accurately organized groups of links. This distinction goes back to the differentiation between buttons and links: like buttons, menus are about taking action in the same context. Navigation structures are about just that, navigating to a new context.

Rather than forcing these elements into a menu construct, wrap them in a navigation landmark and use ARIA roles and properties to control the interaction amongst the groups.

Best practice: Group sets of actions together into menus or menu bars.

Best practice: Use role="navigation" or the HTML <nav> element to create site navigation groupings.

Avoid doing: Don’t force a site navigation structure into a menu, even if it seems simpler to implement.
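
Here’s a sketch of both constructs (the labels and items are illustrative; a real menu also needs the arrow key handling described above, which is omitted here):

    <!-- Site navigation: a landmark wrapping a group of links -->
    <nav aria-label="Main">
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/products">Products</a></li>
        <li><a href="/contact">Contact</a></li>
      </ul>
    </nav>

    <!-- A true menu: a set of actions in the current context -->
    <button id="actions" aria-haspopup="menu" aria-expanded="false">Actions</button>
    <ul role="menu" aria-labelledby="actions" hidden>
      <li role="menuitem" tabindex="-1">Rename</li>
      <li role="menuitem" tabindex="-1">Duplicate</li>
      <li role="menuitem" tabindex="-1">Delete</li>
    </ul>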

7. Tabs

There is an ongoing debate in web accessibility circles about whether or not tabs are a useful construct at all, due to the complexity and confusion they often cause for screen reader users. But they continue to be used, so let’s dig into how to use them properly.

Tab lists and panels are used to organize information into separate sections within the same page. If you have a design that seems to suggest the need for a tab panel, review the full interaction model for tab panels. Some of the critical elements of tabs and tab lists include:

  • A tab list needs to contain at least two tabs and tab panels
  • At least one tab needs to be selected at all times
  • Only content from the selected tab is available to the screen reader
  • Moving between tabs does not change the page, only some content within the page
  • The tab list should be treated as a single interactive element
    • Arrow keys are used to move between the tabs.
    • SPACE (or ENTER) is used to activate the focused tab and load its content.
    • The next TAB press on the keyboard should move to the next interactive element on the page, which may be within the tab panel, or further down the page.

Best practice: Review the intended interaction and only implement tabs if you are sure you can commit to the entire pattern.

Use with caution: If you can’t commit to using the entire pattern for tab lists, use something else. Alternatives will be based on design objectives. Options might include accordions, pages & panels, or navigation elements.
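
For reference, here’s a minimal sketch of the full pattern (ids and content are illustrative):

    <div role="tablist" aria-label="Account settings">
      <button role="tab" id="tab-profile" aria-selected="true" aria-controls="panel-profile">Profile</button>
      <button role="tab" id="tab-billing" aria-selected="false" aria-controls="panel-billing" tabindex="-1">Billing</button>
    </div>
    <div role="tabpanel" id="panel-profile" aria-labelledby="tab-profile">Profile settings</div>
    <div role="tabpanel" id="panel-billing" aria-labelledby="tab-billing" hidden>Billing settings</div>

    <script>
      // Roving tabindex: the tab list is one tab stop; arrows move between tabs
      const tabs = [...document.querySelectorAll('[role="tab"]')];

      function select(tab) {
        tabs.forEach((t) => {
          const selected = t === tab;
          t.setAttribute('aria-selected', String(selected));
          t.tabIndex = selected ? 0 : -1; // only the active tab is tabbable
          document.getElementById(t.getAttribute('aria-controls')).hidden = !selected;
        });
      }

      tabs.forEach((tab, i) => {
        tab.addEventListener('click', () => select(tab));
        tab.addEventListener('keydown', (e) => {
          if (e.key === 'ArrowRight') tabs[(i + 1) % tabs.length].focus();
          else if (e.key === 'ArrowLeft') tabs[(i - 1 + tabs.length) % tabs.length].focus();
          else if (e.key === ' ' || e.key === 'Enter') select(tab);
        });
      });
    </script>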

8. Live Regions

The idea of changing content dynamically on a web page or within an application has been around so long it’s become the standard. The challenge is, assistive technology – like screen readers – isn’t able to detect when these dynamic changes are made. They need to be programmatically told. Which brings us to live regions.

Live regions are HTML containers that screen readers can subscribe to. They provide a method for announcing changes in text content without needing to steal focus from the user’s current interaction. The simplest use of live regions is to set a polite or assertive value on the container’s aria-live attribute.

With aria-live="polite", the screen reader will wait for a pause in the user’s interaction before announcing the content in the live region. This is the most common, and least intrusive, way of alerting a user to dynamic changes in content within a page or application.

If you cannot wait for the user to pause, you can use aria-live="assertive", which will interrupt the user’s current interaction and read the content of the live region immediately. This method is most useful for errors or important system alerts, but it should be used sparingly.
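
A small sketch of both settings (ids and messages are illustrative). Note that the region must already exist in the DOM; screen readers announce changes to its content:

    <!-- Polite: announced at the next pause in the user's interaction -->
    <div id="save-status" aria-live="polite"></div>

    <!-- Assertive: interrupts immediately; reserve for errors and alerts -->
    <div id="form-error" aria-live="assertive"></div>

    <script>
      // Updating the text content is what triggers the announcement
      document.getElementById('save-status').textContent = 'Draft saved';
      document.getElementById('form-error').textContent = 'Session expired. Please sign in again.';
    </script>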

As modern web development evolves, some common patterns have emerged around when and how to inform non-visual users about dynamic content changes. For these use cases, ARIA provides some pre-defined live region roles. A few of the most common include:

  • Log: used to inform the user of logged content such as chat messages, non-critical errors, game interactions, etc.
  • Status: used to present information in a persistent status bar or regular updates on the page.
  • Alert: most commonly used for serious error messages.

Live regions can be incredibly useful, but you should be thoughtful in their use so as not to overwhelm a non-visual user with information they may not need. When they are used too often in a short period of time, they have a tendency to confuse screen readers.

Best practice: Use live regions to communicate dynamic changes on the page to non-visual users.

Best practice: Use pre-defined live region roles to provide standard updates like error logs, status, and alerts.

Avoid doing: Don’t actively interrupt the user’s current task for every update.

9. Application Mode

When designers give developers complex and challenging interactions, it can seem simpler to just go and build custom widgets to handle them. But this can cause serious problems for screen reader users, and often keyboard users as well if every scenario is not considered. Enter “Application Mode”.

Application mode is invoked automatically when a user focuses on a form element in a web page. This change in context is tightly controlled by the screen reader and expected by non-visual users, enabling them to complete a form. Application mode can also be invoked by adding role="application" to any element.

The intent of application mode is to allow the web tool to take control of the keyboard interactions and force them to behave in a way more in line with desktop applications. Many developers are introduced to application mode when heading down the path of custom widget development. Theoretically, this can be quite useful when you have a custom construct that you’re creating a non-standard keyboard interaction for. But this is a VERY advanced technique that should be used with considerable caution.

Application mode steals control of the keyboard and prevents a screen reader user from using the standard interactions within their screen reader. When not handled carefully, the constantly changing context can cause enough confusion for the screen reader to render your tool completely unusable.

Best practice: Standard elements are always best.

There are very few interactions that cannot be made accessible using standard HTML elements and ARIA roles and attributes. With some creative thought and combination of elements, it’s possible to build a very complicated web application without ever using role="application".

Best practice: Review every creative option available. Use role="application" only as a last resort.

If you’ve attempted all other alternatives and really need to use application mode to achieve your goal there are a few key things to consider:

  • If you have multiple custom constructs, every instance of role="application" will be listed in the page landmarks
  • They cannot be distinguished from one another in the list of landmarks
  • They often create serious confusion and conflicting commands for the screen readers

Avoid doing: You should never put role="application" on the body of a page unless the entire page is a custom widget that does not use ANY standard HTML elements.

Application mode is very challenging to implement correctly, as you need to consider EVERY possible keyboard interaction and build a custom navigation pattern that covers each scenario.
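
To illustrate the scale of that commitment, here’s a heavily simplified sketch (the widget and ids are hypothetical). Everything inside a role="application" container is yours to handle:

    <!-- Inside role="application", the screen reader passes keys straight
         through, so every interaction must be implemented by hand -->
    <div role="application" aria-label="Seating chart" tabindex="0" id="chart">
      Custom-rendered seating grid
    </div>

    <script>
      document.getElementById('chart').addEventListener('keydown', (e) => {
        switch (e.key) {
          case 'ArrowUp':    /* move the selection up a row */     break;
          case 'ArrowDown':  /* move the selection down a row */   break;
          case 'ArrowLeft':  /* move the selection left a seat */  break;
          case 'ArrowRight': /* move the selection right a seat */ break;
          case 'Escape':     /* exit the widget gracefully */      break;
          // ...and a deliberate answer for every other key a user may press
        }
      });
    </script>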

10. Design Choices

And the last, but certainly not the least, of the misconceptions is whose responsibility accessibility really is. Many design and development teams still believe that accessibility is an implementation challenge only. While a significant portion of building accessible applications is in the implementation, many things can be caught early if questions are raised in design reviews.

Some of the most common challenges that can be caught before implementation begins are:

  • Color contrast issues
  • Forms without labelled fields
  • Text inputs without borders
  • Images without alternative text
  • Links that look like buttons and vice versa

Best practice: Developers should feel empowered to question designs that include these common accessibility mistakes.

Best practice: Always think through how you would need to implement a proposed design, and ask for clarification of the intent if you’re unsure of how to make it work.

So there you have it, the 10 most common mistakes, misconceptions, and misunderstandings I see developers making when they are just getting started on building accessible applications. If you’ve encountered others, please share them below.

