Building Accessible Micro-Frontends: Lessons from 5 Years in EdTech
Accessibility in a micro-frontend architecture isn't just harder — it's a fundamentally different problem. When your application is composed of independently deployed fragments from different teams, the usual "add some ARIA labels" approach falls apart fast.
Over five years working in EdTech, I've helped drive WCAG 2.2 AA compliance across a federated ecosystem of micro-frontends. These are the patterns that actually worked.
The Unique Challenges
Focus Management Across Boundaries
In a traditional SPA, focus management is already tricky. In a micro-frontend architecture, it's exponential. When a user navigates between fragments — say, from a dashboard (Team A's MFE) to a detail view (Team B's MFE) — who owns the focus?
We solved this with a shared focus management contract: every MFE exposes a focusRoot() method that the shell application calls during navigation. This ensures focus always lands predictably, regardless of which team built the destination.
// Shared contract every MFE implements
interface MfeAccessibilityContract {
focusRoot: () => void;
announceRouteChange: (description: string) => void;
}

Heading Hierarchy Without Coordination
Each MFE naturally wants to start with an h1. When composed together, you end up with four h1 tags on a single page. Screen reader users lose all structural context.
Our solution: the shell provides the page-level h1, and each MFE starts at h2. We enforce this through a shared ESLint rule that flags h1 usage within MFE packages.
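The real check is an ESLint rule that walks the JSX AST; as a rough illustration of the underlying logic, here is a simplified stand-in (the function name and regex approach are mine, not our actual rule):

```typescript
// Simplified stand-in for the ESLint rule: flag any <h1> in MFE source.
// A production rule would inspect JSX AST nodes rather than use a regex.
function findH1Violations(source: string): number[] {
  const violations: number[] = [];
  source.split("\n").forEach((line, i) => {
    if (/<h1[\s>]/.test(line)) {
      violations.push(i + 1); // report 1-based line numbers
    }
  });
  return violations;
}
```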
Colour Contrast in a Design System
When multiple teams consume your design tokens, a single contrast failure in the token set multiplies across every surface. We built automated contrast checking into our token pipeline — every colour pairing is validated against WCAG 2.2 thresholds at build time, not just at design review.
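The core of such a build-time check is small. Below is a sketch of the kind of validation a token pipeline can run, using the WCAG relative-luminance and contrast-ratio formulas (function names are illustrative, not our pipeline's API):

```typescript
// WCAG 2.x relative luminance for a "#rrggbb" hex colour.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x,
  );
  return (hi + 0.05) / (lo + 0.05);
}

// 4.5:1 is the WCAG AA threshold for normal-size text.
const passesAA = (fg: string, bg: string): boolean =>
  contrastRatio(fg, bg) >= 4.5;
```

Running a check like this over every foreground/background pairing in the token set turns a contrast regression into a failed build rather than a design-review miss.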
What Actually Worked
1. axe-core in Every CI Pipeline
This was the single highest-impact change. Every MFE runs @axe-core/playwright in CI. Zero violations is the merge requirement. It caught roughly 60% of our accessibility issues before they ever reached QA.
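A minimal version of such a gate looks roughly like this — the route list is hypothetical, but AxeBuilder, withTags, and analyze are the documented @axe-core/playwright API:

```typescript
// CI gate: fail the build on any axe violation for the listed routes.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

const routes = ["/", "/dashboard"]; // illustrative, not our real route list

for (const route of routes) {
  test(`no axe violations on ${route}`, async ({ page }) => {
    await page.goto(route);
    const results = await new AxeBuilder({ page })
      .withTags(["wcag2a", "wcag2aa", "wcag22aa"]) // scope to WCAG rules
      .analyze();
    expect(results.violations).toEqual([]); // zero violations or the job fails
  });
}
```

Because the check runs per MFE repository, each team sees violations in their own pipeline rather than in a shared integration build days later.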
2. Shared Component Library with Baked-In a11y
Instead of documenting accessibility requirements and hoping teams follow them, we built them into the components themselves. Every form input in our shared library includes proper labelling, error association via aria-describedby, and keyboard handling. Teams literally cannot build an inaccessible form if they use the shared components.
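To illustrate the wiring a shared input can do internally, here is a sketch of how the id and aria-describedby association might be generated so consumers can't forget it (the names and shape are mine, not our library's API):

```typescript
// Attribute bundle a shared form input could generate for its parts.
interface FieldA11y {
  label: { htmlFor: string };
  input: { id: string; "aria-describedby"?: string; "aria-invalid"?: boolean };
  error?: { id: string; role: "alert" };
}

// The error message, when present, is automatically linked to the input
// via aria-describedby, so screen readers announce it with the field.
function fieldA11yProps(name: string, errorMessage?: string): FieldA11y {
  const inputId = `field-${name}`;
  if (!errorMessage) {
    return { label: { htmlFor: inputId }, input: { id: inputId } };
  }
  const errorId = `${inputId}-error`;
  return {
    label: { htmlFor: inputId },
    input: { id: inputId, "aria-describedby": errorId, "aria-invalid": true },
    error: { id: errorId, role: "alert" },
  };
}
```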
3. Regular Screen Reader Testing
Automated tools catch the mechanical issues. They don't catch "this experience is confusing when you can't see the screen." We schedule quarterly VoiceOver and NVDA testing sessions with our QA team, and the findings always surprise us.
The Results
- Zero critical accessibility violations sustained across the platform over an extended period
- 4.5:1 minimum contrast ratio enforced at the design token level
- Focus management contract adopted across all MFE teams
- Significant drop in QA accessibility bugs after CI integration
The Short Version
If teams have to do extra work to be accessible, they won't be — bake it into shared components and automated checks. Own focus management at the shell level; don't leave individual MFEs to figure it out. Automate what you can with axe-core, but it's not sufficient on its own — you need real assistive technology testing. And treat contrast as a build-time check, not a design-time suggestion, especially in a token system consumed by many teams.
Accessibility at scale isn't about perfection — it's about building systems that make the right thing easy and the wrong thing hard.