Pop quiz! If I mention the phrase “assistive technology” what’s the first thing you think of?
For me the answer would be a screen reader, which nicely illustrates a point Ben Cubbon made in a previous article on this blog, Accessibility at OVO: “often, when accessibility is talked about, people who have a visual impairment are thought of first.” It’s not very surprising that the same applies when it comes to thinking about assistive technology (AT).
Like a lot of developers, the first thing I did when I started to take accessibility testing seriously was learn how to use a screen reader. Freely available and easy to use, it was a natural place to start and, if I’m being honest, to stop. Other than using a screen reader and keyboard-only navigation, I rely on automated accessibility testing tools.
I’m not going to be too hard on myself though because AT devices are highly specialised, expensive, and not easily accessible to people who don’t need them. Aside from the cost, they often require a great deal of skill and practice to use. Most of us could get hold of a refreshable braille display, but learning how to read braille would be another thing altogether.
But even if it’s not practical to get our hands (or heads, or mouths) on the devices themselves, we should still take the time to learn what AT devices are capable of and how they work. As user experience practitioners and software engineers our job is to make sure that the products we build provide the best experience for our users, and to do that we need to understand the technology they use.
In this article I’m going to discuss three of the more commonly used types of AT device:

- the refreshable braille display
- the eye gaze device
- the keyguard
I’ll describe how they are used, and suggest some things we should keep in mind when considering the people who use them. We may not be able to test on the AT devices themselves but, by learning about them and watching people use them to operate computers and navigate the web, we are putting ourselves in a better position to build genuinely accessible products.
Refreshable braille display
What is it?
A refreshable braille display (RBD) allows braille users to use computers, mobiles, tablets and other electronic devices by translating text output to braille, and allowing them to type with a braille keyboard.
How does it work?
There are two parts to an RBD. The first is the braille display, which works by raising and lowering pins within a braille cell (six or eight dots arranged in two columns) to form the pattern for each letter. The second part is the braille keyboard, which is used for typing. This has at least six pads (three for each hand), each of which corresponds to a dot in a braille cell. There is often an extra pad on each side for backspace and enter.
There are many types of RBD, including small battery-powered devices designed to be used with phones and tablets, and most models feature additional controls such as thumb switches, space bars and joysticks. Some models will map all of the functions of a QWERTY keyboard to the braille keyboard.
How can we support RBD users?
Provide language information and make our content translatable
Braille is not universal. Languages have one or more braille systems and each system can have one or more codes. Most English braille users will use contracted Universal English Braille (UEB), which uses special signs for frequently used words or groups of letters, as well as patterns for each letter of the alphabet and commonly used symbols and punctuation. This means that an RBD is effectively translating our text content from English to UEB.
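One practical step, sketched below with markup of my own rather than anything from a real product, is to declare the language of the page and of any passage written in another language, so that braille translation software (and screen readers) can choose the right braille code:

```html
<!-- Declare the main document language so AT can pick the correct braille code -->
<html lang="en">
  <body>
    <p>Our opening hours are listed below.</p>
    <!-- Mark passages in another language so they are translated correctly -->
    <p lang="fr">Nos horaires d'ouverture sont indiqués ci-dessous.</p>
  </body>
</html>
```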
Make our content concise
Even larger RBDs only allow a user to read a short section of text at a time. As well as this, information such as paragraphs and line breaks is lost because an RBD only provides a single line of text.
Make our software easy to navigate
Unlike users who are using a keyboard and monitor, RBD users use a single device to read and write. Some users will use an RBD alongside a screen reader, but this isn’t always practical or desirable.
I found this quite a difficult concept to grasp so it might help to watch this video demonstrating a Humanware Brailliant. The easier our software is to navigate and use, the easier it will be for an RBD user to read and write efficiently.
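In practice, much of this comes down to semantic structure. As a rough illustration (the page content here is invented), landmarks and a logical heading hierarchy give an RBD user, who is reading one line at a time, a way to jump straight to the content they want:

```html
<!-- Landmarks and headings let AT users jump between sections
     instead of reading the whole page line by line -->
<header>
  <nav aria-label="Main">
    <a href="/account">Your account</a>
  </nav>
</header>
<main>
  <h1>Your energy usage</h1>
  <section>
    <h2>This month</h2>
    <p>You used 150 kWh.</p>
  </section>
</main>
```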
Eye gaze device
In the video above, former BMX champion Stephen Murray describes the positive impact eye gaze technology has had on his life.
What is it?
An eye gaze device allows people with very limited mobility to control a computer, tablet or other device (such as a games console) by tracking where they are looking.
How does it work?
An eye gaze device uses lights and cameras to pick up reflections from a user’s eyes, and then uses the movement of their eyes to control a cursor on a screen. A user can interact with an application by blinking or holding their gaze in a fixed position for a period of time (‘dwell’). Dwell can be used to scroll a page or simulate a mouse click. Users with some hand mobility may choose to use an eye gaze device alongside a physical switch.
To type, a user will either blink or dwell on an onscreen keyboard, or use augmentative and alternative communication (AAC) software such as a communication grid set.
How can we support eye gaze users?
Clearly identify inputs, buttons and other controls
Text links, buttons and other controls should be easy to identify. We also need to consider the clickable area of buttons and other interactive elements. This is not as simple as making the buttons extra-large: eye gaze devices work best with a maximum screen size of 27 inches, so we need to consider the amount of screen space being occupied by a control.
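As an illustration (the specific sizes are assumptions of mine; 44px is a commonly cited minimum target size, not a figure from this article), CSS can guarantee a generous dwell target without making the control visually enormous:

```html
<style>
  /* Ensure a comfortable dwell/click target.
     44px is a widely used minimum target-size guideline,
     used here as an assumption. */
  .action-button {
    min-width: 44px;
    min-height: 44px;
    padding: 12px 24px;
  }
</style>
<button class="action-button">Submit</button>
```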
(Although beyond the scope of this article, designing for the eye is a fascinating topic. If you’re interested in it I can recommend reading Designing for the Eye – Design Parameters for Dwell in Gaze Interaction.)
Make sure our software can be controlled without a keyboard
We often talk about designing software that can be used without a mouse, but someone using an eye gaze device is effectively controlling software without a keyboard. A good exercise is to imagine filling out a form by using a mouse to type each letter (hopefully leading you to the conclusion that the fewer fields a form has, the better!).
We should also consider the length of our content and the positioning of buttons on a screen. We shouldn't be asking users to scroll to the bottom of a long page to click a button or tick a checkbox.
Keyguard

A user with reduced mobility typing on a keyboard fitted with a keyguard.
What is it?
A keyguard is a cover which fits over a keyboard or touch screen device to help users with reduced motor control type accurately.
How does it work?
A keyguard sits over a keyboard and reduces the accessible area of the keys by providing a hole above each one. This requires a more precise movement from the user to press a key, and makes it harder to press the wrong key. It also allows a user to rest their hand on the keyboard without pressing any of the keys.
Unlike RBDs and eye tracking devices, keyguards are inexpensive and simple to use, which is one of the reasons why they are among the more widely used AT devices. They are often used with adaptive keyboards, for example models with large buttons, but they are available for most standard keyboards (there is a keyguard available for the keyboard I’m typing on right now).
How can we support keyguard users?
Our approach to keyguard users is different to RBD and eye gaze users. In those cases we are considering how to make our products accessible to people using a specific device, but with a keyguard user we should be thinking more widely about supporting users with limited mobility.
Limit the amount of typing we ask our users to do
We should ask our users to do as little typing as possible because using a keyguard slows down typing speed. Good keyboard navigation will help, as will using the correct type of form field and using as few form fields as possible.
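For example (a sketch of my own, not markup from the article), choosing the right input types and enabling autofill lets the browser do much of the typing, and a select box replaces free typing with a single choice:

```html
<!-- autocomplete lets the browser fill the field;
     a select replaces typing with a single selection -->
<form>
  <label for="email">Email</label>
  <input id="email" type="email" autocomplete="email">

  <label for="title">Title</label>
  <select id="title">
    <option>Ms</option>
    <option>Mr</option>
    <option>Mx</option>
  </select>

  <button type="submit">Continue</button>
</form>
```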
Make sure our software can be easily controlled without a mouse
It is likely that a keyguard user will also find it difficult to use a mouse. Testing without a mouse is probably one of the more familiar accessibility testing techniques, but it’s worth considering it from the perspective of users with limited mobility.
Specifying accesskeys is a good accessibility feature, but using them requires the user to press and hold multiple keys at the same time which will be difficult for a user with reduced motor control. If we do use accesskeys we should make sure that we are also providing the functionality in another way.
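A minimal sketch of what that might look like (the choice of key and the markup are assumptions of mine):

```html
<!-- accesskey offers a shortcut for those who can use it,
     but the control itself remains an ordinary focusable button,
     so no key chord is ever required -->
<button type="submit" accesskey="s">Save</button>
```

Note that the modifier keys needed to trigger an accesskey vary between browsers and operating systems, which is another reason never to rely on them alone.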
Similarly, making sure the tab key works as expected is important for all users, but we should reduce the number of key presses needed to navigate the site. We can do this by using skip links where appropriate, or simply by keeping the number of buttons and links on a page to a minimum.
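A common skip link pattern, sketched here under my own assumptions about class names and page structure, keeps the link visually hidden until it receives keyboard focus and makes it the first focusable element on the page:

```html
<style>
  /* Keep the skip link off-screen until it receives keyboard focus */
  .skip-link {
    position: absolute;
    left: -9999px;
  }
  .skip-link:focus {
    left: 0;
  }
</style>
<a class="skip-link" href="#main-content">Skip to main content</a>
<nav aria-label="Main">
  <a href="/home">Home</a>
  <a href="/help">Help</a>
</nav>
<main id="main-content">
  <h1>Your account</h1>
</main>
```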
Want to know more?
Although I’ve barely scratched the surface of AT, I hope it’s helped to give you a new perspective on accessibility. I also hope I’ve made it clear that considering AT users doesn’t require a great deal of extra work. Many of the things I’ve discussed should follow naturally from building accessible software. It’s not just AT users who will thank you for writing concise content, designing software that is easy to use, and building websites which are easy to navigate.
If you would like to know more, here are links to some of the content I used while researching this post:
- An overview of Braille Devices (Perkins Learning)
- Assistive technology 101: what you need to know (Bureau of Internet Accessibility)
- Assistive technology basics (Understood)
- Designing Websites for Motor-Skill Disabilities (WeCo)
- Eye-gaze control technology (Cerebral Palsy Alliance)
- Keyboards for People with Disabilities (Better Living Through Technology)
- Reading with refreshable braille displays and iPads (RNIB Bookshare)
- Use of Eye Movement Gestures for Web Browsing (Clemson University)