
Mobile app accessibility: How to find and fix issues early

Your mobile app’s accessibility gaps are costing you users, trust, and budget. Here’s how to find and fix them before they turn into complaints, lawsuits, or fire drills.

By Jeremy Gonzales, WAS · Updated Dec 16, 2025 · Web Accessibility

Mobile app access is no longer optional. Whether you’re a university, a public agency, or a SaaS company shipping an HR or customer portal, people rely on your app to do essential tasks. When it doesn’t work for disabled users, that’s not just a UX bug — it’s an accessibility failure with legal and reputational consequences.

Regulations such as the Americans with Disabilities Act (ADA), Title II, Section 508, and other digital accessibility standards are being increasingly applied to mobile experiences, not just websites. The enforcement details will continue to evolve, but the practical question remains the same: can people with disabilities complete the same tasks as everyone else in your app?

Too often, the answer is “no” — and teams don’t realize it until someone complains. A campus emergency alert goes unread because a screen reader can’t parse the notification. A new hire cannot complete an onboarding app using only a keyboard. A resident trying to pay a utility bill in a city app can’t activate the “Submit payment” button. A banking customer can’t use voice control to navigate key workflows.

No one sets out to exclude those users. Most teams just ship features that work for touch, assume that means “accessible enough,” and only dig deeper after an internal escalation or legal threat.

The teams that avoid that pattern aren’t doing anything magical. They treat accessibility as part of normal delivery: clear requirements, mobile-friendly patterns in the design system, real-device testing with assistive technologies, and CI/CD pipelines that catch regressions before they ship.

Consider this: accessibility isn't just for disabled users. It elevates the experience for all users, regardless of how they interact with the app.


This guide walks through where mobile apps typically break accessibility and how to build a practical testing approach you can actually maintain — whether you’re building in-house, working with vendors, or doing a mix of both.

Where mobile apps typically fail accessibility

Users don’t care about your tech stack, your sprints, or how many tickets are in your backlog. They care whether they can actually complete tasks, such as reading an alert, submitting a form, watching a video, or navigating a menu. When those flows break for people with disabilities, it shows up as complaints, lost trust, and in some cases legal risk under frameworks like the ADA, Title II, and Section 508 — regardless of whether you ever see a formal “audit.”

The good (and slightly depressing) news is that mobile apps tend to fail in very predictable ways. Let’s walk through the failure modes we see over and over again: inaccessible notifications, broken keyboard and focus handling, fragile navigation patterns, weak color contrast and visual cues, media with missing alternatives, and forms that quietly lock people out.

Push notifications that screen readers can’t process

Your notification system works perfectly (for people who can see it). Campus emergencies, HR updates, assignment deadlines — all neatly popping up on screen. But what about a screen reader? Suddenly, your perfectly crafted alerts might as well be written in hieroglyphics.

To clarify, there’s a distinction between a custom notification and a platform push notification. Developers get in trouble when they build a custom alert or notification instead of using the platform’s notification APIs and SDKs. Platform-level notifications are typically highly accessible out of the box, which is why developers are encouraged to use them.

Most dev teams pat themselves on the back when notifications display and tap-through works. Meanwhile, screen readers are having an existential crisis trying to parse “New Alert” (seriously, that's about as helpful as a chocolate teapot). Without proper semantic markup, your carefully crafted messages turn into word salad.

The teams that excel in notification accessibility didn't discover a magical solution. They just stopped treating screen readers like an afterthought. They structure alert text properly (since screen readers can't read your mind), implement clear semantic relationships (because context matters), and use system-provided haptic patterns in accordance with their documented meanings. Most importantly, they test with real screen readers — because simulation only gets you so far.

Navigation systems that trap keyboard users

Your app's navigation works great with a touchscreen (congratulations). But keyboard users hit invisible walls everywhere. That slick hamburger menu? A keyboard maze with no exit. Those fancy dropdown filters? A trap where “Tab” key presses go to die.

Keyboard navigation failures usually trace back to incorrect focus handling, tab order, or semantics, and they show up in unexpected places. Take a student orientation app where keyboard focus vanishes into the void after selecting a campus location. Or an HR portal where keyboard users circle endlessly through form fields without ever reaching the submit button. Or a campus document library where keyboard navigation skips right over the download button (which might explain why your accessibility complaints keep piling up).

Here are some others we frequently encounter:

  • Focus isn't moved to newly revealed content, such as drawers or modals.
  • Focus is trapped inside a modal because the close button isn't an interactive element.
  • Focus isn't contained within a region, like a modal, so users can tab through the content behind it.

Focus indicators ghost users more often than bad Tinder dates. Keyboard users tab through your interface blind, never knowing which element they're about to activate. Even worse? Focus getting stuck in modal windows, forcing users to reload the entire app to escape (a special kind of digital prison).
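
These focus rules are easy to model without any UI framework. The sketch below is illustrative only (hypothetical names, not a platform API): opening a modal moves focus into it, Tab stays contained inside it, and closing returns focus to whatever opened it.

```kotlin
// Minimal focus-order model (illustrative, not a real platform API).
// Opening a modal moves focus into it, Tab cycles inside it, and
// closing returns focus to the element that opened it.
class FocusManager(private val pageOrder: List<String>) {
    private var trapOrder: List<String>? = null
    var focused: String = pageOrder.first()
        private set

    fun openModal(modalOrder: List<String>) {
        trapOrder = modalOrder
        focused = modalOrder.first()   // focus moves to the revealed content
    }

    fun closeModal(returnTo: String) {
        trapOrder = null
        focused = returnTo             // focus returns to the trigger
    }

    fun tab() {
        val order = trapOrder ?: pageOrder
        val i = order.indexOf(focused)
        focused = order[(i + 1) % order.size]  // wraps; never escapes the trap
    }
}
```

Wrapping inside the modal (instead of falling through to the page behind it) is exactly the containment the third bullet above asks for; the close button being a real focusable element is what makes `closeModal` reachable at all.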

Innovative development teams build keyboard navigation testing into their QA process. They verify that every interactive element is reachable, ensure focus indicators appear, and create escape routes from all interactive components. Better yet, they let keyboard users test their apps before release — because nothing beats real-world navigation attempts at exposing accessibility holes.

Color contrast and visual cues that exclude users

Remember that red error message on your payment form? Looks great against the white background (to you). But for users with color vision deficiencies, it might as well be invisible. And that subtle gray text explaining password requirements? Good luck reading that on a sunny day.

Your design system's color palette passed every automated check in existence. Yet your mobile forms still leave users guessing which fields are required. A tiny red asterisk doesn't cut it when users can't distinguish red from black. Those status indicators in your dashboard that rely purely on color? They're about as useful as a progress bar with no labels.

Mobile apps love their minimalist design trends. Borderless buttons that only highlight on hover. Form fields marked only by subtle bottom borders. Gorgeous on Dribbble, maybe — but a nightmare for users with low vision or cognitive disabilities trying to identify interactive elements.

Solid accessibility testing catches these issues before users do:

  • Strong color contrast ratios (at least 4.5:1 for standard text, 3:1 for large text).
  • Multiple visual indicators for important states.
  • Clear borders or backgrounds on interactive elements.
  • 3:1 color contrast ratios for user interface components and focus indicators.

Note that an app has to meet contrast ratios in every state except disabled, and in every user-preference mode, such as light and dark mode.
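
Those ratios come straight from the WCAG relative-luminance formula, which is simple enough to sketch. The helper below is illustrative and not tied to any design-system tooling; it takes plain sRGB channel values from 0 to 255.

```kotlin
import kotlin.math.pow

// Relative luminance per the WCAG 2.x definition, from 0-255 sRGB channels.
fun relativeLuminance(r: Int, g: Int, b: Int): Double {
    fun linearize(c: Int): Double {
        val s = c / 255.0
        return if (s <= 0.03928) s / 12.92 else ((s + 0.055) / 1.055).pow(2.4)
    }
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
}

// Contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), from 1:1 up to 21:1.
fun contrastRatio(a: Triple<Int, Int, Int>, b: Triple<Int, Int, Int>): Double {
    val la = relativeLuminance(a.first, a.second, a.third)
    val lb = relativeLuminance(b.first, b.second, b.third)
    return (maxOf(la, lb) + 0.05) / (minOf(la, lb) + 0.05)
}
```

Black on white comes out at exactly 21:1, while pure red (`255, 0, 0`) on white lands just under 4.5:1 — which is why that red error text on a white background fails for body-sized text even though it "looks fine."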

Media content is missing critical accessibility features

That orientation video explaining student resources? Beautiful production value, zero captions. Those lecture recordings in your learning platform? Crystal clear audio, no transcripts in sight. That virtual campus tour with its image-heavy map navigation? A black hole for screen reader users.

Think of media accessibility like the special features on a DVD — except these aren't optional extras. Yet apps on mobile devices consistently treat them that way. Audio descriptions are often pushed to “later” (i.e., never). Transcripts wait for someone to complain. Alternative text for images? Most teams figure the filename should be enough (narrator: it wasn't). Even that slick onboarding tutorial your marketing team spent months perfecting forgot one tiny detail: not everyone can hear the voiceover.

The chaos multiplies in image-heavy interfaces. Campus maps float in space, with no text descriptions of building locations. Course catalogs bury critical details in unreachable infographics. Profile pictures leave screen readers stuck in an endless loop of announcing “image” like a broken robot. And those trendy decorative icons? They're getting read aloud as “button_1.png” — about as useful as a PDF of interpretive dance moves.

Innovative development teams have learned this lesson the hard way: accessibility standards need to be integrated into the content pipeline, not the backlog. Videos don't go live without captions. Images ship with meaningful alt text baked in. Audio content arrives with transcripts attached. Most importantly, these teams have automated checks that catch missing elements before release, because retrofitting accessibility features makes debugging Internet Explorer look like a vacation.

Your app's accessibility blind spots (pun intended)

Mobile apps don’t just fail on major issues like navigation or media alternatives. They fall apart in the tiny interactions nobody bothers to test on real devices: microscopic touch targets, cryptic error states, gesture-only navigation, fussy media controls, and forms that quietly break the moment the on-screen keyboard appears.

These are the blind spots that slip past design reviews and happy-path QA, only to show up later as complaints, low task completion rates, and accessibility tickets. Let’s walk through the worst offenders and what to look for when you test.

Those tiny tap targets in your UI? They're forcing users to play a frustrating game of precision finger gymnastics. The close button on your modal window sits exactly 8 pixels from the edge (because symmetry).

Mobile interfaces need breathing room. Users with tremors, large fingers, or motor control challenges need targets they can actually hit. Some straight talk about sizing:

  • Make interactive elements at least 24x24 pixels for AA compliance (44x44 px for AAA).
  • Space clickable elements at least 8 pixels apart (more if they're critical controls).
  • Create distinct touch zones for each action (that footer with five microscopic links? Fix it).
  • Add padding around interactive elements (because nobody's finger is a perfect 44-pixel square).
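
Those size and spacing rules can be sanity-checked in a layout test with pure geometry. The sketch below uses a hypothetical `Target` type (axis-aligned bounds, density-independent pixels assumed) rather than any real layout framework.

```kotlin
// Axis-aligned bounds in density-independent pixels (illustrative type).
data class Target(val name: String, val x: Int, val y: Int, val w: Int, val h: Int)

// WCAG 2.2 target sizes: 24x24 minimum at AA (SC 2.5.8), 44x44 at AAA (SC 2.5.5).
fun meetsMinSize(t: Target, min: Int = 24): Boolean = t.w >= min && t.h >= min

// Edge-to-edge gap between two targets; 0 if they touch or overlap.
fun gap(a: Target, b: Target): Int {
    val dx = maxOf(a.x - (b.x + b.w), b.x - (a.x + a.w), 0)
    val dy = maxOf(a.y - (b.y + b.h), b.y - (a.y + a.h), 0)
    return maxOf(dx, dy)
}
```

A test that walks every pair of interactive elements and flags `gap(...) < 8` would catch the "Log Out next to Delete Account" problem described below before it ships.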

Teams building campus apps, HR portals, banking applications, and public-service apps often get this wrong. Take that student portal, where the “Log Out” and “Delete Account” buttons are nestled together like nervous teenagers at prom. Or that HR app where the “Submit” and “Cancel” buttons play hide and seek with your thumb. A misclick on these isn't just annoying — it creates legitimate accessibility barriers.

Innovative teams validate their touch targets in the real world. They test with users who have different hand sizes, motor control levels, and interaction styles. They test their UI under actual usage conditions (such as walking or riding a bus). Most importantly, they disregard any design trend that prioritizes aesthetics over usability.

Write error messages that don't make users cry

Your error messages read like rejection letters written by robots. “Invalid input detected in form field” (thanks for narrowing it down). “Error code 7B-399X occurred” (oh good, a mystery novel). “Please correct all errors to continue” (on a form with 47 fields and zero hints). Or “Please fill out this field”, which tells users nothing about what to enter.

Mobile error states often fail to meet accessibility standards. Red text announces validation failures to everyone except colorblind users. Error messages are displayed visually but never announced by screen readers. Required field markers disappear when the form auto-scrolls to the error section.

Good error handling does four things:

  1. Tells users exactly what went wrong (not “validation error” but “password must include at least one number”).
  2. Shows where the problem is (with more than just color changes).
  3. Explains how to fix it (instead of making users guess).
  4. Takes users to the first field with an error.
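
The first three rules are easy to encode. This sketch (hypothetical `FieldError` type) builds the announcement text a screen reader live region should receive: the error count first, then one what/where/how line per field. Moving focus to the first failing field, the fourth rule, is platform-specific and left to the caller.

```kotlin
// Illustrative shape for one validation failure: where, what, and how to fix it.
data class FieldError(val field: String, val problem: String, val fix: String)

// Builds a screen-reader-friendly summary: error count first, then one
// clear line per field. Position the same text next to each field visually.
fun announceErrors(errors: List<FieldError>): String {
    if (errors.isEmpty()) return ""
    val count = if (errors.size == 1) "1 error found" else "${errors.size} errors found"
    val lines = errors.map { "${it.field}: ${it.problem}. ${it.fix}." }
    return (listOf("$count.") + lines).joinToString(" ")
}
```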

Take that registration app where error messages flash red at the top of a 20-field form, forcing screen reader users to hunt for the problem field. Or that HR portal that marks errors with tiny red dots that vanish when the form scrolls. Even better: that financial aid calculator that announces “calculation error” without saying which number needs fixing.

Innovative teams treat error messages like actual communication, not compiler output. They write clear instructions based on user experience research, position error text next to the relevant fields, and ensure messages stay visible until users fix the problem. Most importantly, they test their error states with assistive technology — after all, an inaccessible error message is just another error.

Build navigation that works with actual hands (not just designer dreams)

Your navigation looks stunning in Figma presentations. Those precisely stacked menu items. That elegant hamburger menu animation. That innovative gesture-based interface. Too bad it turns into a horror show the moment anyone tries to use it with one hand, or while walking, or with any motor impairment. Often, we'll also see menu and drawer components that expand without moving focus to the newly revealed content.

Mobile navigation fails in creative ways. Consider the campus events app, where the calendar swipe gesture conflicts with the system's back gesture. Or that student portal where the bottom navigation bar sits exactly where thumbs naturally rest (hello, accidental taps). Even better: that HR app where critical functions hide behind gesture combinations that would make a Street Fighter character jealous.

Here's what works in the real world:

  • Put primary actions where thumbs naturally land (not in screen corners).
  • Make navigation targets large enough for moving fingers (not just static testing).
  • Provide on-screen controls instead of relying on path-based gestures (some users can’t perform swiping motions).
  • Provide clear alternative paths (because not everyone can perform complex gestures).

Two WCAG Success Criteria are especially relevant to mobile devices and path-based gestures. 2.5.1 Pointer Gestures (Level A): any task that uses multipoint or path-based gestures can also be performed with a single pointer, so path-based gestures are never required. 2.5.4 Motion Actuation (Level A): any functionality triggered by motion of the device (e.g., shaking) can also be operated using on-screen controls.
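
In practice, SC 2.5.1 means routing the gesture and the on-screen control to the same action. A minimal sketch, with hypothetical names and no UI framework:

```kotlin
// A pager whose state changes only through next()/prev(). The swipe handler
// and the visible Next/Previous buttons both call these same methods, so the
// path-based gesture is an enhancement, never the only way (SC 2.5.1).
class Pager(private val pageCount: Int) {
    var current = 0
        private set

    fun next() { if (current < pageCount - 1) current++ }
    fun prev() { if (current > 0) current-- }
}
```

Wiring would look like `onSwipeLeft -> pager.next()` and `"Next" button tap -> pager.next()`; a user who can't swipe loses nothing.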

Campus apps often embrace experimental navigation patterns (even if it means disregarding best practices). Multi-level nested menus that require the precision of a brain surgeon. Gesture shortcuts that work perfectly in demos but fail on bumpy bus rides. Navigation bars that disappear mid-scroll, leaving keyboard users stranded in accessibility dead zones.

Innovative teams build navigation that works in actual life situations. They test interfaces while walking, using one hand and only their thumbs. They validate their patterns with users with varying mobility ranges. Most importantly, they remember that innovative doesn't always mean usable — sometimes a boring button beats a clever gesture.

Design media controls that everyone can actually use

Those video player controls in your training modules? Unusable by keyboard. That audio player timeline or neat circular volume control? Nearly impossible for a user with tremors to grab. Media controls look slick in your portfolio but fail the moment real humans try to use them.

Accessibility guidelines have been clear about this for years, but mobile media interfaces love their minimalist controls that fade away after three seconds. Great for aesthetics, but if implemented incorrectly, problematic for accessibility. So is that video player where the timeline scrubber requires the precision of a neurosurgeon to grab.

Here's what makes media controls actually work:

  • Keep controls visible (or make them easily accessible).
  • Make hit areas generous (timeline scrubbing shouldn't require perfect aim).
  • Include chapter bookmarks for key areas of the video.
  • Include visual feedback that doesn't rely on color alone.

Consider providing keyboard shortcuts for everything (not just play/pause). But be careful. First, avoid conflicting with pre-assigned screen reader commands, such as the arrow keys. Additionally, single-character shortcuts should be remappable or disableable, and active only when the component has focus, per 2.1.4 Character Key Shortcuts (Level A).
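
The 2.1.4 requirements translate into three small properties on a shortcut handler: remappable, disableable, and scoped to focus. A sketch with hypothetical names, not a real player API:

```kotlin
// Character-key shortcuts per SC 2.1.4: the key can be remapped, the whole
// feature can be disabled, and shortcuts fire only while the media component
// has focus (never globally, where they'd collide with screen reader keys).
class ShortcutHandler(
    var enabled: Boolean = true,
    var playPauseKey: Char = 'k'   // user-remappable
) {
    fun handle(key: Char, componentFocused: Boolean): String? {
        if (!enabled || !componentFocused) return null
        return when (key) {
            playPauseKey -> "toggle-playback"
            else -> null
        }
    }
}
```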

Training, education, and onboarding apps often fail in this regard, and every accessibility issue compounds the next: training videos with microscopic chapter markers. Audio players where the volume slider disappears on focus. Courseware platforms where the playback speed control requires clicking a target the size of an ant. Even worse: media players that trap keyboard focus, forcing users to reload the page to escape.

Innovative teams understand that media controls aren't decoration — they're essential interface elements. They build controls that work with keyboards, screen readers, and imprecise pointing devices. They test their players with real users in real situations. And they never sacrifice functionality for that clean, minimal look.

Create forms that don't exclude half your users

Forms are where accessibility dreams go to die. Your student registration form looks clean and simple until someone tries to use it with a screen reader. That benefits or financial aid calculator works perfectly, as long as you can see the tiny validation messages. Your course enrollment or onboarding wizard flows beautifully — assuming you can use a mouse.

Complex forms are especially guilty — from course registration and scholarship applications to HR onboarding and benefits enrollment.

Mobile forms find creative ways to break accessibility. Labels that look attached to fields but aren't, blocking equal access for anyone using assistive tech. Required field indicators that only use color. Validation messages that disappear faster than free pizza at a developer meetup. And everyone's favorite: form controls that rearrange themselves when the keyboard appears.

Here's what actually works:

  • Label every field properly (no, placeholder text isn't a label).
  • Mark required fields clearly (with more than just color).
  • Keep fields visible when the keyboard appears.
  • Ensure validation messages remain visible long enough to be read.
  • Provide detailed instructions about what data is expected from users.
  • When fields require data entered in a specific format, such as phone numbers, email addresses, or zip codes, provide instructions/examples of how to structure the input.
  • Fields asking for user information have valid autocomplete attributes.
  • Don't disable the Submit button. Let users enter all their information before running field validation, then surface errors for any invalid entries.
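
The last two points fit together: validate on submit, never pre-emptively disable the button, and return every problem at once so the error summary can list them together. A sketch with made-up example formats, not real requirements:

```kotlin
// Run all checks on submit (not per keystroke) and return every problem in
// one pass. The submit button stays enabled the whole time; an empty result
// means the form can proceed. The formats here are illustrative examples.
fun validateOnSubmit(fields: Map<String, String>): List<String> {
    val errors = mutableListOf<String>()

    val email = fields["email"].orEmpty()
    if (!Regex("""[^@\s]+@[^@\s]+\.[^@\s]+""").matches(email)) {
        errors += "Email: must look like name@example.com."
    }

    val zip = fields["zip"].orEmpty()
    if (!Regex("""\d{5}""").matches(zip)) {
        errors += "Zip code: must be 5 digits, such as 90210."
    }

    return errors
}
```

Note that each message follows the error-handling rules from earlier: it names the field, the problem, and an example of valid input.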

Campus forms are especially guilty. Take that course registration system where the submit button hides behind the virtual keyboard. Or that scholarship application where error messages float randomly around the page like lost balloons. Even better: the contact form, where the reset button sits right next to the submit button (because who doesn't love losing all their data by accident?).

Innovative teams treat forms like the critical interfaces they are. They test against strict accessibility requirements and compliance standards with screen readers, keyboard-only navigation, and other assistive technology, because forms should collect data without collecting curse words from frustrated users.

Build accessibility into your process (not onto it)

Most teams still bolt accessibility testing onto the end of the development process. On mobile, that’s exactly how you end up shipping flows that break for screen reader and keyboard users, then spend months trying to patch them across iOS versions, devices, and releases.

The teams that avoid this trap weave accessibility into every step of their process, including design reviews, backlog refinement, code review, and deployment checks, whether they’re building in-house or working with third-party vendors.

“Test early and test often” only matters if accessibility is part of what you’re testing. A missing accessible name or broken focus order is a five-minute fix in code review and a multi-week headache once it’s in production and tied to analytics, tracking, and translations.

You already have a checklist for code review: tests passing, performance impact acceptable, and no obvious security issues. Accessibility deserves the same treatment, not a vague “we’ll get to it later” column in your backlog.

Here's how teams make accessibility part of their review culture:

  • Add accessibility linters to the IDE and treat warnings as real defects, not background noise.
  • Bake accessibility checks into pull request templates instead of hiding them in a separate checklist nobody opens.
  • Block merges when automated accessibility checks fail (even when you’re “just fixing a typo”).
  • Make quick screen reader and keyboard/assistive input checks as routine as verifying the page loads.

Do that consistently, and accessibility stops being a special project you have to “add on” later. It becomes muscle memory — just how your team builds mobile apps.

Make your CI/CD pipeline catch accessibility fails

Your deployment pipeline runs unit tests, security scans, and performance checks. But accessibility testing? Not even a checkbox in your QA process. Your pipeline happily ships code that screen reader users can’t make sense of and keyboard users can’t reliably navigate.

Automated accessibility checks don't need fancy tools or complex setups. Modern testing frameworks already include everything you need:

  • Siteimprove automated checks running in your test suite (catching WCAG violations before they hit staging).
  • ESLint rules enforcing proper ARIA attributes.
  • Component tests that check keyboard navigation.
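
One of the simplest pipeline checks is walking a view tree and failing the build when an interactive element lacks an accessible name. The sketch below uses a hypothetical `Node` type rather than any real testing framework; platform semantics trees expose similar information.

```kotlin
// Illustrative view-tree node; real frameworks expose comparable semantics.
data class Node(
    val role: String,               // e.g., "button", "link", "text", "image"
    val label: String? = null,      // the accessible name, if any
    val children: List<Node> = emptyList()
)

// Collect interactive nodes with no usable accessible name. A CI step can
// block the merge whenever this list is non-empty.
fun unlabeledInteractive(root: Node): List<Node> {
    val interactiveRoles = setOf("button", "link", "checkbox", "textfield")
    val bad = mutableListOf<Node>()
    fun walk(n: Node) {
        if (n.role in interactiveRoles && n.label.isNullOrBlank()) bad += n
        n.children.forEach { walk(it) }
    }
    walk(root)
    return bad
}
```

This is the check that turns "screen reader announces it as clickable element 47" from a user complaint into a failed build.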

Building these checks into your pipeline means developers get instant feedback when they break accessibility features — no more waiting for users to report issues or auditors to flag problems. Your CI/CD becomes your first line of defense against accessibility fails.

Write acceptance criteria that include everyone

Acceptance criteria are concise, checkable statements that outline what “done” means for a user story. They’re written in plain language, so anyone—designer, tester, PM, or stakeholder—can verify whether the feature works in the real world, not just in code.

Your user stories cover the happy path. “As a user, I want to upload a file.” But which user? The one who sees your fancy upload button, or the one whose screen reader announces it as “clickable element 47”?

Most acceptance criteria read like a checklist for people with perfect vision, steady hands, and high-end devices. Upload button visible? Check. Drag-and-drop working? Check. Screen reader support? Crickets.

Here's what inclusive acceptance criteria look like:

  • Navigation is supported by keyboard controls and can be activated without a mouse (no trapped focus).
  • Error messages read correctly by screen readers (in the proper order).
  • Touch targets meet size requirements (no microscopic buttons).
  • Interactive elements have clear focus states (visible to everyone).

Stop treating accessibility as a bonus point in your acceptance criteria. Make it a blocker, just like security and performance. Because a feature that only works for some users isn't fully implemented; it's only partially done.

Turn your QA process into an accessibility checkpoint

Most QA plans treat best practices and automated testing like New Year's resolutions — something you'll eventually get to, but not necessarily in the near future. A checkbox at the bottom of a test plan that nobody reads. The last item in your definition of done that somehow never gets done.

Here's a wild idea: What if accessibility testing were just testing? Your QA team already clicks buttons, fills forms, and navigates screens. They're halfway to accessibility testing without even knowing it. Add keyboard navigation and screen readers to those same test paths, and suddenly you're catching issues before they become expensive problems.

Innovative QA teams already do this:

  • Run every user flow with keyboard navigation (because not everyone has a mouse).
  • Check content with screen readers (because alt text shouldn't read like a novel).
  • Validate touch targets on actual devices (not just Chrome's device simulator).
  • Use real assistive tech (because emulators lie).

Fixing accessibility bugs during QA is certainly more cost-effective than remediation after the fact. Finding those same bugs after users start complaining? That's another story.

Where mobile accessibility is heading (and how to get there first)

Augmented reality navigation that screen readers can’t process. Virtual reality training that causes people to feel motion-sick. AI-generated alt text that reads like a cryptic poem. Every new interaction model brings fresh ways to exclude users if you haven’t built accessibility into your foundations.

Teams pushing the accessibility envelope aren’t waiting for regulations or lawsuits to catch up with technology. They’re already:

  • Training and configuring AI models with proper semantic structure and ARIA patterns, instead of trusting generic “describe this image” output.
  • Favoring predictable, reusable interaction patterns over one-off cleverness that breaks assistive tech.
  • Designing gesture controls that can be customized, simplified, or replaced with on-screen alternatives for users who can’t perform complex motions.
  • Testing with next-generation assistive devices (updated screen readers, switch controls, voice input, alternative pointers) instead of only using emulators.

Mobile accessibility won’t remain a static WCAG checklist for long. Voice interfaces need the same level of error handling and feedback as visual ones. Augmented reality overlays need accessible wayfinding and clear landmarks. Virtual environments require navigation systems that are accessible to individuals who struggle with pointing, dragging, or swiping in 3D space.

The way you get ahead of that future isn’t by chasing every shiny new framework. It’s by making accessibility testing as non-negotiable as security testing: wired into your design decisions, CI/CD, vendor selection, and QA. You wouldn’t ship an app without security testing. Don’t ship it without testing for accessibility — and if you’re hiring an app development company to build it, hold them to the same standard.

Jeremy Gonzales, WAS

Accessibility professional with over 9 years of experience identifying, prioritizing, and remediating digital accessibility barriers. Expert-level understanding of the WCAG 2.2 guidelines and of testing websites and apps with assistive technologies such as JAWS, NVDA, VoiceOver, and TalkBack. Experienced in speaking with all levels of an organization about the necessity of creating inclusive content. Open to Accessibility Consultant and Accessibility Engineering roles.