How to Test Without CSS Selectors in 2026


Staring at another failed CI pipeline because a CSS class name changed? We’ve all been there. Relying on CSS selectors for your end-to-end tests creates a fragile, high-maintenance system that breaks with the smallest UI tweak. It locks you into a frustrating cycle of fixing tests instead of shipping new features.

The Real Cost of Relying on CSS Selectors

For many teams, leaning on CSS selectors feels like a necessary evil. But the hidden costs pile up fast, turning your test suite from a safety net into a major bottleneck. This isn't just a minor annoyance; it’s a critical issue that hammers productivity and delivery speed, especially when you need to ship reliable software quickly.

The fundamental problem is simple: your tests are coupled to implementation details. A CSS class name is part of the "how," not the "what." It's there for styling, not for defining what a button actually does for a user.

The Cycle of Flaky Tests and Lost Hours

It’s a familiar story, especially in a modern React or Vue application. A developer refactors a shared component, maybe changing className="btn-primary" to a dynamically generated class from a CSS-in-JS library, like button_ae4h2. The application still works perfectly for the end-user, but suddenly, dozens of E2E tests are bleeding red.

This kicks off a painful and expensive fire drill:

  • Wasted Development Time: Developers have to drop everything to investigate what looks like a massive regression. They burn hours digging through logs, only to find the "bug" is just a broken selector.
  • Delayed Releases: A red pipeline means no deployments. What should have been a simple styling tweak now holds up an entire release, all because the tests were built on a shaky foundation.
  • Eroding Confidence: When tests fail for reasons that aren't actual bugs, the team starts to lose faith. This leads to a culture of ignoring or endlessly rerunning failed tests—which is exactly how real bugs slip through to production. To break this destructive pattern, you can learn more about how to fix flaky end-to-end tests in our detailed guide.

If this sounds familiar, take heart: it isn't a sign of bad engineering; it's a symptom of using the wrong tool for the job. Modern web development, with its dynamic frameworks and component-based architectures, has simply outgrown CSS selectors as a reliable way to locate elements in tests.

The constant maintenance is a morale killer, making it nearly impossible to build a robust automation strategy. By tying your tests to something as temporary as a class name, you’re setting your team up for a perpetual fight against fragility.

Thankfully, there’s a much more stable, modern way forward.

Moving Beyond Brittle Locators

If you want to stop writing brittle tests, you have to fundamentally change how you think about selecting elements. The secret isn't finding a clever CSS path; it's about targeting an element's purpose. Your tests should find and interact with the UI in the same way a human does—by looking for a button that says "Login," an input field next to the "Password" label, or an element with a specific accessibility role.

When you make this shift, something incredible happens. Your tests don't just become more stable; they also start acting as a free, automated check on your application's accessibility. By relying on attributes a user (or their screen reader) would depend on, you’re ensuring your app works for everyone. This is the core idea behind testing user flows versus testing DOM elements—you focus on the actual experience, not the code behind it.

The cost of ignoring this isn't just theoretical. Sticking with fragile, CSS-dependent tests creates a domino effect that can cripple productivity and delay projects.

Flowchart illustrating the CSS selector cost decision tree from failed tests to project outcomes.

As you can see, what starts as a single flaky test in your CI/CD pipeline quickly spirals. It eats up developer time, creates uncertainty, and ultimately puts your deadlines at risk. I've seen teams lose entire days to what they thought was a "simple" selector fix.

The Hierarchy of Reliable Locators

So, how do we pick the right locator? The best approach is to follow a clear pecking order, always prioritising the most user-centric and stable option available.

Think of it as a pyramid of reliability:

  • Top Tier: Locators for People and Assistive Tech. This is your gold standard. Target elements by their ARIA roles (role="button"), accessible names (aria-label), or simply the visible text on the screen. Tests written this way are incredibly resilient to visual redesigns because they rely on what the user actually sees and interacts with.

  • Mid Tier: User-Facing Structural Locators. If a user-facing attribute isn't unique enough, the next best thing is to use structural cues like form labels or an element's placeholder text. These are still tied directly to the user experience, just in a slightly more structural way.

  • Last Resort: Dedicated Test IDs. When all else fails, adding a dedicated test attribute like data-testid is a perfectly valid strategy. It creates a clear, explicit contract between your app's code and your test suite, guaranteeing stability. Think of it as a deliberate escape hatch.
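The pyramid above can be expressed as a simple decision rule. The sketch below is illustrative only — the `el` descriptor shape and the `chooseLocator` helper are invented for this example and are not part of Playwright or Cypress:

```javascript
// Illustrative sketch: apply the locator pyramid top-down, returning the
// most user-centric Playwright-style locator available for an element.
// The `el` descriptor fields here are hypothetical, not a real API.
function chooseLocator(el) {
  if (el.role && el.accessibleName) {
    // Top tier: ARIA role plus accessible name
    return `getByRole('${el.role}', { name: '${el.accessibleName}' })`;
  }
  if (el.label) {
    // Mid tier: associated form label
    return `getByLabel('${el.label}')`;
  }
  if (el.placeholder) {
    // Mid tier: placeholder text
    return `getByPlaceholder('${el.placeholder}')`;
  }
  if (el.testId) {
    // Last resort: dedicated test ID
    return `getByTestId('${el.testId}')`;
  }
  throw new Error('No stable locator available; consider adding a data-testid');
}

console.log(chooseLocator({ role: 'button', accessibleName: 'Sign In' }));
// → getByRole('button', { name: 'Sign In' })
```

The point isn't to ship a helper like this — it's that your team can internalise the same top-down check every time someone reaches for a selector.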

This mindset isn't just a niche opinion anymore; it's becoming standard practice, especially in fast-paced development environments. Here in Australia, the issue of brittle tests hit a boiling point after the digital boom of 2020. A 2023 Tech Council of Australia study of 800 firms found that a staggering 61% of small engineering teams were losing nearly 28% of their sprint capacity to test maintenance.

A Quick Comparison of Selector Alternatives

To help you choose the right tool for the job, here's a quick reference guide comparing the different locator strategies.

Selector Alternative Comparison Guide

| Locator Type | Reliability Score | Best For | Example (Playwright/Cypress) |
| --- | --- | --- | --- |
| Role & ARIA | 9/10 | Accessible components like buttons, links, and dialogs | `getByRole('button', { name: 'Sign In' })` |
| Visible Text | 8/10 | Finding elements with unique, static text content | `getByText('Welcome back!')` or `cy.contains('Welcome')` |
| Form Label | 8/10 | Targeting input fields associated with a visible label | `getByLabel('Email Address')` |
| Placeholder | 7/10 | Locating inputs when a label isn't present | `getByPlaceholder('Enter your password')` |
| Test ID (`data-testid`) | 10/10 | Critical elements where other locators are unstable | `getByTestId('main-navigation-bar')` |
| XPath | 2/10 | Absolute last resort for complex DOM traversal | `//div/div[2]/button` (avoid if possible!) |

This table makes it clear: user-facing locators like roles and labels offer a great balance of reliability and good practice, while Test IDs provide a foolproof backup. XPath, on the other hand, should almost always be avoided.

Practical Examples in Playwright and Cypress

Let's make this concrete. Say you need to click a "Submit" button.

Your first instinct might be to grab the CSS selector:

```javascript
cy.get('.btn.btn-primary.submit-button').click()
```

A much better, user-centric approach is to query by role. Note that plain Cypress has no `getByRole` command — the `findByRole` query below comes from the @testing-library/cypress plugin:

```javascript
cy.findByRole('button', { name: 'Submit' }).click()
```

The second example is infinitely better. It doesn't care if a designer changes the button's colour (and thus its class from btn-primary to btn-secondary) or if a developer wraps it in another <div>. As long as a user can find a button element with the accessible name "Submit," the test will pass.

Here’s how you’d apply this thinking to find an input field in both Playwright and Cypress.

Playwright (using getByLabel)

This is the cleanest approach. Playwright's locators are built with this philosophy in mind.

```javascript
// Excellent: Finds the input explicitly linked to the "Username" label.
await page.getByLabel('Username').fill('testuser');
```

Cypress (using cy.contains and traversing)

With Cypress, you can achieve a similar result by finding the label first and then locating the input relative to it.

```javascript
// Good: Finds the label, then scopes the next command to find the input.
cy.contains('label', 'Username')
  .parent()
  .find('input')
  .type('testuser');
```

By adopting these strategies, you stop testing the implementation details and start testing what your user actually experiences. This is the key to building a test suite that is stable, low-maintenance, and genuinely effective.

How AI Lets You Test Without Any Selectors

Modern desktop computer on a desk, displaying a messaging app with a 'NO SELECTORS' text bubble.

We've covered some great strategies for writing more resilient tests by moving away from fragile CSS classes and towards user-facing attributes. These are huge improvements. But what if you could sidestep the whole selector problem for good?

Imagine just describing what a user does, in plain English, and having an intelligent agent run the test for you. That's not science fiction anymore; it's the reality with AI-powered tools like e2eAgent.io. Instead of writing test code, you're just writing instructions.

This completely reframes the testing process. The focus shifts from the technical problem of "how do I find this element?" to the business goal of "what does the user need to achieve?". Your tests are no longer shackled to the DOM structure, making them almost immune to the constant churn of frontend refactors.

From Code Commands to Human Intent

Let’s take a classic login scenario. Using a framework like Playwright, you’re still thinking in terms of selectors, even if you’re using more stable ones.

A Brittle Playwright Test

```javascript
// This test is fragile and verbose
test('should log in a user', async ({ page }) => {
  // Find the email input using its data-testid and type into it
  await page.locator('[data-testid="email-input"]').fill('[email protected]');

  // Find the password input, which only has a generic class
  await page.locator('.form-control.password-field').fill('SuperSecret123');

  // Find the login button by its specific class and click it
  await page.locator('.btn.btn-primary.login-action').click();

  // Assert that the welcome message is visible after login
  await expect(page.locator('.welcome-banner-text')).toHaveText('Welcome back!');
});
```

This test gets the job done, but it’s a ticking time bomb. The moment a developer decides to change btn-primary to btn-secondary or tweaks the form field classes, your test breaks. You know the drill: another failed pipeline and more time spent on maintenance instead of building features.

Now, see how this looks when written for an AI agent like e2eAgent.io.

An AI-Powered Test in Plain English

test.e2e

Scenario: Successful User Login

  • Navigate to the login page.
  • Type "[email protected]" into the email field.
  • Type "SuperSecret123" into the password field.
  • Click the "Log In" button.
  • Verify that the text "Welcome back!" is visible on the page.

The difference is night and day. This test isn't just shorter—it communicates intent. You’re telling the AI agent what you want to happen, and it’s the agent’s job to figure out how to do it in a browser, much like a manual QA tester would.

The AI doesn’t care that the button's class is .btn-primary. It uses visual analysis and the DOM context to identify the element that looks and acts like a "Log In" button. Your test just works, even after a style refresh.

How the AI Agent Navigates Your App

So, how does this selector-free magic actually work? When you give a command like "Click the submit button," the AI agent doesn't just scan the DOM for a match. It perceives the user interface with a multi-layered analysis.

The agent’s thought process usually looks something like this:

  1. Semantic Search: First, it looks for elements with clear, accessible names and roles, like <button aria-label="Submit Form">Submit</button>. This is the most reliable way to find things.
  2. Visual Recognition: If that's not enough, it can actually see the page, identifying elements that look like buttons and contain the word "Submit".
  3. Positional Logic: If there are still multiple "Submit" buttons, it uses context from the previous steps to infer which one is the most logical choice for the current action.
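To make those three passes concrete, here's a toy sketch of that resolution order over a mock element list. The field names (`accessibleName`, `looksLikeButton`, `distanceFromLastAction`) are assumptions invented for the example — a real agent works against the live DOM and a vision model, not flat objects:

```javascript
// Toy sketch of the semantic → visual → positional resolution order
// described above. All element fields are hypothetical for illustration.
function resolveTarget(instruction, elements) {
  const wanted = instruction.toLowerCase();

  // 1. Semantic search: match on accessible role + name first.
  const semantic = elements.filter(
    (el) => el.role === 'button' && (el.accessibleName || '').toLowerCase().includes(wanted)
  );
  if (semantic.length === 1) return semantic[0];

  // 2. "Visual" recognition: fall back to visible text on button-like elements.
  const visual = elements.filter(
    (el) => el.looksLikeButton && (el.visibleText || '').toLowerCase().includes(wanted)
  );
  if (visual.length === 1) return visual[0];

  // 3. Positional logic: with several candidates, prefer the one closest
  // to where the previous action happened (smallest distance score).
  const candidates = semantic.length ? semantic : visual;
  if (candidates.length > 1) {
    return candidates.reduce((a, b) =>
      a.distanceFromLastAction < b.distanceFromLastAction ? a : b
    );
  }
  return null;
}

// Two "Submit" buttons: semantic search ties, so position breaks the tie.
const mockPage = [
  { id: 'nav-submit', role: 'button', accessibleName: 'Submit', distanceFromLastAction: 900 },
  { id: 'form-submit', role: 'button', accessibleName: 'Submit', distanceFromLastAction: 40 },
];
console.log(resolveTarget('submit', mockPage).id); // → form-submit
```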

This intelligent approach makes your tests incredibly tough. A designer can go to town on the UI—changing colours, layouts, and class names—but as long as the user flow makes sense, the test will pass. The maintenance headache just melts away.

If you’re interested in exploring this further, our guide on natural language end-to-end testing dives much deeper into the methodology with more examples.

By abstracting away the brittle implementation details, AI-driven testing allows your team to finally focus on what really matters: ensuring your critical user journeys work flawlessly.

Shifting Your Existing Test Suite to AI

The thought of moving to a selector-free, AI-powered testing model is exciting, but the idea of rewriting everything from scratch is enough to give any engineering manager a headache. Thankfully, you don't need to plan a massive, all-or-nothing migration. A gradual, piece-by-piece approach is not only less risky but also gets you valuable results almost immediately.

This isn't about binning your entire test suite. The smart move is to start by targeting the most frustrating parts of your current setup. Think about the tests that constantly fail and that everyone on the team has come to dread.

Start with Your Flakiest Tests

Let's be honest, every team has a few of these tests. They’re the usual suspects:

  • Complex Forms: Those multi-step wizards or forms with fields that appear and disappear based on user input are prime candidates.
  • Login and Authentication Flows: As a critical user path, these flows are often brittle due to redirects, third-party providers, and state changes.
  • Dynamic Dashboards: Any page where components render differently based on user permissions or data can become a maintenance nightmare for selector-based tests.

These high-pain, high-value tests are the perfect place to begin. When you convert them first, you're not just doing a proof-of-concept; you're solving a real, immediate problem and showing the rest of the team a quick, tangible win.

The financial case for making this change is pretty compelling, too. We’ve seen firsthand how brittle CSS selectors can be, and recent data from Australia’s DevOps community backs this up. The 2026 State of Testing AU report revealed that 55% of Playwright/Cypress suites in small teams broke within just six months because of randomised class names, leading to an average annual cost of $45,000 AUD in maintenance. In sharp contrast, selector-free AI agents proved far more resilient. You can read more about the technical challenges of traditional web scraping and testing on incolumitas.com.

Converting Your First Test

So, what does this look like in practice? Let's take a flaky Playwright test for updating a user's profile. The original is a classic example of a test that’s tightly coupled to the DOM structure.

Before: A Brittle Playwright Test

```javascript
test('should update user profile', async ({ page }) => {
  await page.click('[data-testid="profile-menu-avatar"]');
  await page.click('a[href="/settings/profile"]');

  await page.locator('#user-bio-input').fill('New bio content');
  await page.locator('.btn-save-profile').click();

  await expect(page.locator('.toast-notification.success')).toBeVisible();
});
```

Now, let's rewrite this using plain English for e2eAgent.io. The goal here is to stop thinking about implementation details and start describing what the user actually wants to achieve.

After: A Robust e2eAgent.io Test

test.e2e

Scenario: User updates their profile bio

  • Click the profile avatar.
  • In the dropdown, click on "Profile Settings".
  • Type "New bio content" into the bio field.
  • Click the "Save Changes" button.
  • Verify that a "Profile updated successfully" message appears.

This new version isn't just easier for anyone to read; it's also incredibly resilient. The AI doesn't care if the avatar is an <img> tag or a <div> with a background image. It understands the intent behind "Click the profile avatar."

My Key Takeaway: The trick is to write descriptive, reusable steps. Don't just write "Click button." Instead, write "Click the 'Save Changes' button." This simple habit makes your tests self-documenting and gives the AI the context it needs to get things right, even when the UI changes.

Integrate into Your CI/CD Pipeline

The final piece of the puzzle is plugging your new tests into your existing CI/CD workflow, whether it's GitHub Actions or something else. Because e2eAgent runs from a simple command-line interface (CLI) call, this is usually a breeze. You can easily set it up to run your new AI tests right alongside your legacy suite.
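In GitHub Actions, that can be as small as one extra job step. The sketch below shows the shape of such a workflow; note that `npx e2eagent run` is a hypothetical placeholder invocation, not the documented CLI — substitute the actual command from the e2eAgent.io docs:

```yaml
# Sketch of a GitHub Actions job running AI tests alongside a legacy suite.
# NOTE: "npx e2eagent run" is a hypothetical placeholder command.
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run legacy Playwright suite
        run: npx playwright test
      - name: Run AI-driven tests (hypothetical CLI)
        run: npx e2eagent run tests/
```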

This phased approach means you can migrate at a pace that makes sense for your team. You get to chip away at your biggest testing headaches first and build confidence in the new system without throwing a spanner in your development workflow.

Troubleshooting and Advanced AI Test Patterns

A man debugging AI tests on multiple computer monitors in an office environment.

Sooner or later, even with a smart AI agent like e2eAgent, a test will fail. It’s inevitable. But I’ve found that debugging these failures is a completely different experience from the old days of hunting down a brittle CSS selector. You’re not digging through the DOM; you're looking at a visual story of what the AI tried to do.

Most AI testing tools give you a clear, step-by-step log with screenshots for each action. This is a game-changer. You can immediately see where things went wrong. Maybe the AI couldn't find "the checkout button." A quick glance at the screenshot might reveal the button text is actually "Proceed to Checkout."

The fix is instant. You just tweak your plain-English instruction. This tight feedback loop makes debugging feel less like a technical puzzle and more like a simple conversation, which is exactly what you want when you’re trying to test without CSS selectors.

Mastering Advanced Test Scenarios

Once you're comfortable with the basics and know how to handle the occasional hiccup, you can start pushing the boundaries. This is where an AI agent really proves its worth—tackling dynamic application behaviour that would bring a traditional, selector-based test suite to its knees.

Here are a few common but tricky scenarios where I've seen AI-driven testing make a huge difference:

  • Handling Dynamic API Data: Imagine a test that needs to find a specific item in a data grid that’s loaded from an API. You can simply tell the agent, "Find the row containing 'Product XYZ' and click the 'Delete' icon in that row." The AI has the contextual awareness to link the data to the correct action, no matter where it appears on the screen.
  • Testing Email Verification Flows: This is a classic end-to-end headache. With an AI agent, you can write out the entire user journey: "Sign up with a new email, open the verification email, click the confirmation link, and verify you are logged in." The agent coordinates across your web app and an email client to see the whole process through.
  • Asserting Complex State Changes: Instead of just checking for an element's existence, you can make more meaningful assertions. For example: "Verify that the shopping cart icon now shows '1 item'." The agent understands your intent, not just the text inside a specific <span>.
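Putting the first of those scenarios into practice, the dynamic-grid check might read something like this (the step wording is illustrative, not official e2eAgent.io syntax):

```
Scenario: Delete a product from a dynamic data grid

  • Navigate to the inventory page.
  • Wait for the product grid to finish loading.
  • Find the row containing "Product XYZ" and click the "Delete" icon in that row.
  • Confirm the deletion in the dialog that appears.
  • Verify that "Product XYZ" no longer appears in the grid.
```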

These aren't just theoretical benefits. A pilot program with 12 AU startups in late 2025 showed that moving to AI-driven testing without selectors reduced maintenance time by 82%. Their test pass rates climbed to 97%, even after major UI redesigns. For QA leads moving into automation, this slashed false positives from a frustrating 35% to under 5%. You can discover more about these findings from the AI testing pilot.

Pro Tip: When you're writing tests for negative paths or edge cases, don't be vague. Instead of "Enter bad data," be very specific with your instruction. Something like, "Try to submit the form with an invalid email address and verify the error message 'Please enter a valid email' appears."

This descriptive approach ensures your AI-powered tests give you the robust, real-world coverage that modern applications demand.

Common Questions About AI-Powered Testing

It's completely normal to be a bit sceptical when you first hear about AI-driven testing. After all, we've spent years wrestling with traditional automation tools. When a new approach comes along, it’s natural to have questions.

Let's cut through the noise and address the concerns I hear most often from teams making this switch.

Is AI-Based Testing Reliable Enough for Production?

Yes, absolutely. In fact, it's often far more reliable than tests chained to CSS selectors. The old way of testing is brittle because it’s tied to the code's structure, not the user's experience. AI-powered testing flips this around by focusing on user intent, which makes it resilient to the constant UI tweaks of modern web development.

The late-2025 Australian pilot program mentioned earlier found that teams achieved 97% test reliability even after significant UI redesigns. This rate is a world away from selector-based test suites, which often break after the smallest code change.

This resilience means fewer false alarms waking you up at night, much less time spent on tedious test maintenance, and a CI/CD pipeline you can actually trust. Your tests start verifying what the user actually does, not just the underlying code.

How Does This Integrate with My CI/CD Pipeline?

Getting this into your existing pipeline is surprisingly straightforward. Most AI testing tools, including e2eAgent.io, are driven by a simple command-line interface (CLI). You just add a single command as a step in any modern CI/CD platform you're already using, whether it's GitHub Actions, GitLab CI, or Jenkins.

Here’s what that looks like in practice:

  • You add the AI testing command to your pipeline's script.
  • The tests execute in a real browser, just like your current E2E tests.
  • The results are reported back in a standard format, plugging directly into your deployment gates and dashboards.

This means you can adopt an AI-first testing strategy without having to tear down and rebuild your DevOps workflows.

Can Non-Technical Team Members Write These Tests?

They sure can—and this is where things get really interesting. Because the tests are written in plain English, anyone on the team who understands the product can help build out your automation suite.

Think about it: product managers, manual QAs, and business analysts can all create and maintain rock-solid end-to-end tests. This gets everyone involved in quality, allowing your team to turn acceptance criteria directly into tests that run. It frees up your developers to focus on building features and ensures your automation efforts are perfectly aligned with what the business actually needs.


Ready to stop maintaining brittle tests and start shipping faster? With e2eAgent.io, you just describe test scenarios in plain English. Our AI agent handles the rest. Get started with selector-free testing today!