Tuning a traditional radio is a simple and incredibly rich interaction. As we turn the dial, we hear snippets of sound as the stations come and go. We navigate by the programs on air, picking up not only the subject matter but the tone of each, based on even a fraction of a second of sound. Like safe-breakers listening for the pins in the lock to fall, we’re immersed in the medium itself, feeling our way by sound.
Yet somehow, digital broadcasting has abandoned this simplicity and directness. In most digital radios, station names are displayed one at a time, with no sense of location. Digital buffering means there is a delay between arriving at a station and actually hearing it. This is clumsy enough, but if you are visually impaired and cannot read the display, browsing what is on quickly becomes a frustrating game of trial and error, which does seem rather ironic, given that radio is perhaps the most fundamentally accessible medium for those who have trouble seeing.
But might the obvious way of applying universal design be the wrong way? The textbook answer would be to add redundant information, replicating the visual display with, say, speech synthesis that speaks the station names aloud. How many extra buttons, icons, and menus have we just added? It is not technology that is holding us back; it is a lack of imagination.