How to validate accessibility in multiple locales
By Ashok Kumar Yadav, Senior Software Engineer and Accessibility Architect
Validating accessibility on a single site is difficult enough. Across 15 or more locales, it becomes a systems problem.
Most teams treat localization and accessibility as separate workflows. Localization handles translation, date formats, and currency. Accessibility handles contrast, keyboard navigation, and screen reader support. The two rarely speak to each other, and that gap is where failures hide.
This guide covers how to build a validation approach that treats both as a shared concern from the beginning.
Understand what changes between regions
Before configuring the tools, it’s helpful to know which accessibility properties are actually locale-sensitive.
Some are obvious. Text length changes with translation: a German string can be 30% longer than its English equivalent, breaking fixed-width containers and truncating content that screen readers depend on. Right-to-left (RTL) languages, such as Arabic and Hebrew, reverse the visual and logical order of the page. A component that passes English keyboard navigation checks may fail in an RTL context if direction (dir) attributes are missing or incorrectly scoped.
Others are less obvious. Number and date formats affect how assistive technology reads values aloud. A date written as 04/05/2024 is read differently depending on local conventions, and if the underlying markup does not include machine-readable formatting via the time element, screen readers may announce it ambiguously. Currency symbols, telephone number formats, and units of measurement carry similar risks.
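One way to remove that ambiguity is to pair a locale-formatted display string with a machine-readable datetime attribute. A sketch using Intl.DateTimeFormat; the buildDateMarkup helper is illustrative, not a standard API:

```javascript
// Render a date with locale-appropriate formatting while keeping a
// machine-readable ISO value in the <time> element's datetime attribute.
// buildDateMarkup is a hypothetical helper, not part of any library.
function buildDateMarkup(date, locale) {
  const human = new Intl.DateTimeFormat(locale, { dateStyle: 'long' }).format(date);
  const iso = date.toISOString().slice(0, 10); // YYYY-MM-DD, unambiguous
  return `<time datetime="${iso}">${human}</time>`;
}

const d = new Date(Date.UTC(2024, 3, 5)); // 5 April 2024
console.log(buildDateMarkup(d, 'en-US')); // e.g. <time datetime="2024-04-05">April 5, 2024</time>
console.log(buildDateMarkup(d, 'de-DE')); // e.g. <time datetime="2024-04-05">5. April 2024</time>
```

The visible text varies by locale, but assistive technology can fall back on the stable datetime value.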
Start by auditing your content types (not just your components) and mapping which ones have locale-specific rendering behavior.
Build a locale matrix
A locale matrix is a structured list of supported locales along with the specific accessibility risks each presents. It doesn’t have to be complex. A spreadsheet or markdown table works.
For each locale, record:
– Text direction (left-to-right or right-to-left).
– Writing system (Latin, CJK, Arabic, Devanagari, etc.).
– Known translation length variation relative to the source language.
– Any regulatory requirements specific to that region (for example, the European Accessibility Act, which applies in all EU member states).
This matrix becomes the basis for deciding which locales need manual testing, which can be covered by automation, and where your highest-risk areas are.
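The matrix can also live next to the code so tooling can consume it. A sketch with illustrative entries; the expansion factors and regulation lists are made up for the example:

```javascript
// Illustrative locale matrix: one entry per supported locale.
// Expansion is string growth relative to the source language.
const localeMatrix = [
  { locale: 'en-US', dir: 'ltr', script: 'Latin',  expansion: 1.0,  regulations: [] },
  { locale: 'de-DE', dir: 'ltr', script: 'Latin',  expansion: 1.35, regulations: ['EAA'] },
  { locale: 'ar-AE', dir: 'rtl', script: 'Arabic', expansion: 1.25, regulations: [] },
  { locale: 'ja-JP', dir: 'ltr', script: 'CJK',    expansion: 0.9,  regulations: [] },
];

// Surface the highest-risk locales first: RTL layouts or regulated regions.
const highRisk = localeMatrix.filter(l => l.dir === 'rtl' || l.regulations.length > 0);
console.log(highRisk.map(l => l.locale)); // de-DE (regulated), ar-AE (RTL)
```

A query like this is what drives the tiering decisions described later: the filter output is your manual-testing shortlist.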
Automate what you can and know its limits
Automated tools like Deque axe, IBM Equal Access Checker, and Lighthouse detect a reliable subset of problems: missing alt attributes, insufficient color contrast, missing form labels, broken heading order. These checks are locale-agnostic in most cases, meaning they can run in your CI/CD pipeline against every locale variant without manual intervention.
The practical approach is to run automated checks against a representative URL for each locale on every deployment. You can do this with a script that loops through your locale list and passes each URL to the testing tool of your choice.
Example: axe + Node.js locale loop
```javascript
const { AxePuppeteer } = require('@axe-core/puppeteer');
const puppeteer = require('puppeteer');

// The base URL was elided in the original; substitute your own deployment.
const baseUrl = 'https://example.com';
const locales = ['en-US', 'de-DE', 'ar-AE', 'ja-JP'];

(async () => {
  const browser = await puppeteer.launch();
  for (const locale of locales) {
    const page = await browser.newPage();
    await page.goto(`${baseUrl}/${locale}/`);
    const results = await new AxePuppeteer(page).analyze();
    console.log(`${locale}: ${results.violations.length} violations`);
    results.violations.forEach(v =>
      console.log(`  - ${v.id}: ${v.description}`)
    );
    await page.close();
  }
  await browser.close();
})();
```
What automation won’t catch: focus management issues introduced by RTL layouts, the quality of screen reader announcements for translated strings, or touch target issues caused by text expansion. These require manual testing.
Tier manual testing by locale
Not all locales need the same level of manual testing. Group your locales into tiers based on traffic, regulatory exposure, and accessibility risk profile.
A practical three-tier structure:
Tier 1: Full manual testing.
Your highest-traffic locales and any locale in a region with active accessibility regulation. Test with real assistive technology: NVDA or JAWS on Windows, VoiceOver on macOS and iOS, TalkBack on Android.
Tier 2: Guided manual testing.
Mid-tier locales where only the components known to vary by locale are tested: date pickers, form validation messages, navigation patterns.
Tier 3: Automated only.
Lower-traffic locales where automated checks run on every deployment, with manual testing triggered only when violations are detected.
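The tier decision can be encoded so CI knows how much testing each locale receives. A sketch; the thresholds here are illustrative, not recommendations:

```javascript
// Illustrative tier assignment: regulatory exposure, traffic share, and
// RTL risk decide how much manual testing a locale receives.
function assignTier({ trafficShare, regulated, rtl }) {
  if (regulated || trafficShare > 0.15) return 1; // full manual testing
  if (rtl || trafficShare > 0.05) return 2;       // guided manual testing
  return 3;                                       // automated only
}

console.log(assignTier({ trafficShare: 0.40, regulated: false, rtl: false })); // 1
console.log(assignTier({ trafficShare: 0.08, regulated: false, rtl: true  })); // 2
console.log(assignTier({ trafficShare: 0.01, regulated: false, rtl: false })); // 3
```

Keeping the rule in one function means a locale's tier changes automatically when its traffic or regulatory status changes, instead of drifting out of date in a document.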
Test right-to-left layouts explicitly
RTL validation deserves its own step in your process because the failure modes are different from left-to-right testing.
Check these specifically:
– The dir="rtl" attribute is set on the html element (not just individual elements) and is driven by the locale, not hardcoded.
– The focus order follows the visual reading order of the page. Keyboard navigation that moves from left to right in English should move from right to left in Arabic.
– Icons that imply direction (arrows, progress indicators, breadcrumb separators) are mirrored appropriately.
– Modal dialogs and drawers open from the correct side.
– Form error messages appear next to the corresponding field in the correct position for the text direction.
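The first check, dir driven by locale rather than hardcoded, is easy to automate. A sketch; the RTL language list is deliberately partial and the helper names are illustrative:

```javascript
// Partial set of RTL language codes; extend it to cover your locale matrix.
const RTL_LANGS = new Set(['ar', 'he', 'fa', 'ur']);

// Expected dir attribute for a BCP 47 locale tag, e.g. 'ar-AE' -> 'rtl'.
function expectedDir(locale) {
  const lang = locale.split('-')[0].toLowerCase();
  return RTL_LANGS.has(lang) ? 'rtl' : 'ltr';
}

// In a Puppeteer run, actualDir would come from the rendered page, e.g.
// page.$eval('html', el => el.getAttribute('dir')).
function checkDir(locale, actualDir) {
  return actualDir === expectedDir(locale);
}

console.log(checkDir('ar-AE', 'rtl')); // true
console.log(checkDir('ar-AE', 'ltr')); // false: RTL locale served an LTR page
```

A check like this slots directly into the locale loop from the automation section, one assertion per locale URL.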
Test RTL locales with a native or fluent speaker when possible. Problems that appear correct visually may be semantically incorrect in ways that only emerge during actual use.
Handle text expansion proactively
Translating into German, Finnish, or Portuguese can increase string length by 20-40% compared to the English source. The result is truncation, overflow, and layout collapse, all of which can hide or break accessible elements.
Two approaches that work in practice:
Pseudolocalization: replace source strings with extended placeholder text during development, before actual translations exist. Tools like the pseudolocale npm package generate strings 30% to 40% longer than the originals, using characters outside the basic Latin alphabet, which exposes layout and markup problems early.
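The transformation itself is simple enough to sketch. This toy version is not the pseudolocale package, just an illustration of the idea:

```javascript
// Toy pseudolocalization: swap ASCII vowels for accented look-alikes and
// pad the string by ~35% to simulate translation expansion.
const MAP = { a: 'á', e: 'é', i: 'í', o: 'ó', u: 'ú', A: 'Á', E: 'É', O: 'Ó' };

function pseudolocalize(str) {
  const accented = [...str].map(c => MAP[c] || c).join('');
  const padding = '~'.repeat(Math.ceil(str.length * 0.35));
  return `[${accented}${padding}]`;
}

console.log(pseudolocalize('Save changes')); // [Sávé chángés~~~~~]
```

The brackets make truncation immediately visible: if either bracket disappears in the UI, the container is clipping text.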
Component length constraints: define maximum character counts in your component contracts and share them with your localization team. When a translation exceeds the limit, the component must handle the overflow gracefully, wrapping rather than truncating, and the overflow behavior must itself be tested for accessibility.
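A contract check along these lines can run in the localization pipeline; the field names, strings, and budget are illustrative:

```javascript
// Flag translations that exceed a component's declared character budget.
function overBudget(translations, maxChars) {
  return Object.entries(translations)
    .filter(([, text]) => text.length > maxChars)
    .map(([locale]) => locale);
}

// Hypothetical translations for a button label with a 12-character budget.
const buttonLabel = {
  'en-US': 'Save',
  'de-DE': 'Änderungen speichern',
  'fi-FI': 'Tallenna muutokset',
};
console.log(overBudget(buttonLabel, 12)); // de-DE and fi-FI exceed the budget
```

Running this per component turns "the German string broke the layout again" from a production surprise into a build-time warning.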
Create a shared defect taxonomy
When you log accessibility issues across multiple locales, categorizing them consistently makes patterns visible. Without a shared taxonomy, the same underlying component defect gets logged fifteen times as fifteen separate bugs.
A simple taxonomy:
– **Component-level:** A defect in the shared component itself that affects every locale using it.
– **Locale-specific:** A defect that appears in only one locale due to translation content, text direction, or locale formatting.
– **Integration:** A defect where the component is correct but the locale context breaks it. For example, a correct date component receives a malformed date string from the localization layer.
Labeling defects this way lets you route fixes correctly. Component-level defects go to the design system team. Locale-specific defects go to the localization team. Integration defects require both.
Document your validation coverage
At the end of each release cycle, generate a coverage report that shows:
– Which locales were tested.
– What level of testing was applied.
– How many violations were found and resolved per locale.
– Any known open issues and their associated risk level.
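A minimal version of that report can be generated straight from test run records; the record shape here is illustrative:

```javascript
// Build a per-release coverage report from test run records.
function coverageReport(runs) {
  return runs.map(r =>
    `${r.locale} | tier ${r.tier} | found ${r.found} | resolved ${r.resolved} | open ${r.found - r.resolved}`
  ).join('\n');
}

// Hypothetical results for one release cycle.
const runs = [
  { locale: 'en-US', tier: 1, found: 4, resolved: 4 },
  { locale: 'ar-AE', tier: 1, found: 9, resolved: 6 },
  { locale: 'ja-JP', tier: 3, found: 1, resolved: 1 },
];
console.log(coverageReport(runs));
```

Even a plain-text table like this, regenerated every release, gives the open-issue trend per locale without any extra bookkeeping.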
This document gives stakeholders an honest view of where gaps exist, which is more useful than a blanket accessibility statement. It also creates a baseline that makes regressions visible over time.
Accessibility validation across many locales is not a one-time audit. It is an iterative process that improves as your tools, taxonomy, and team knowledge mature. Building automation into your pipeline and a clear manual testing tier structure gives you a foundation that scales as the number of locales grows.
Ashok Kumar Yadav, Senior Member, IEEE, Member, IAAP