Advanced Playwright with AI | Testμ 2025!

Join Andrew Knight, also known as “Pandy,” as he dives into advanced Playwright techniques for web testing and demonstrates how AI can accelerate test automation.

Explore page object abstraction, behavior separation, test data management, and AI-powered tools to build robust, scalable, and faster web tests.

📅 Don’t miss out: book your free spot now

What’s the most useful way AI can boost Playwright testing beyond what we already do manually?

How would you approach building a “self-healing” Playwright test suite using AI, where tests automatically adapt to minor UI changes without manual intervention, while still maintaining high reliability and preventing unintended behavior?
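One way to sketch the non-AI foundation for this: rank several locator strategies per element and log whenever the primary one stops matching, so any “healing” is explicit and reviewable rather than silent. The `resilientLocator` helper below is hypothetical; an AI layer would sit on top of it, proposing new primary locators based on the fallback logs.

```ts
import { test, expect, type Page, type Locator } from '@playwright/test';

// Hypothetical helper: try a ranked list of candidate locators and report
// when the primary one no longer matches, so fallbacks are visible in logs.
async function resilientLocator(page: Page, candidates: Locator[]): Promise<Locator> {
  for (const [index, candidate] of candidates.entries()) {
    if (await candidate.count() > 0) {
      if (index > 0) {
        console.warn(`Primary locator failed; fell back to candidate #${index}`);
      }
      return candidate;
    }
  }
  throw new Error('No candidate locator matched the page');
}

test('login still works after minor UI changes', async ({ page }) => {
  await page.goto('https://example.com/login'); // placeholder URL
  const submit = await resilientLocator(page, [
    page.getByTestId('login-submit'),              // preferred, stable hook
    page.getByRole('button', { name: /log in/i }), // semantic fallback
  ]);
  await submit.click();
  await expect(page).toHaveURL(/dashboard/);
});
```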

POM is widely known, but not always a perfect fit for modern web development. Does this workshop introduce alternatives such as the Screenplay Pattern, and what are the pros and cons versus POM?

The Page Object Model is a classic pattern, but it can be challenging for modern, component-based web apps. Does this workshop explore alternative patterns like the Screenplay Pattern, and what are the trade-offs compared to traditional POM?

What are some use cases where Playwright’s UI mode or code generator can accelerate test creation?
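For context, the usual workflow is to record a rough draft with codegen or iterate in UI mode, then refactor the draft into fixtures and abstractions. A minimal sketch (placeholder site and selectors):

```ts
// Sketch: codegen and UI mode are fastest for scaffolding a first draft.
//   npx playwright codegen https://example.com   # record actions into test code
//   npx playwright test --ui                     # iterate with watch mode and time travel
// A recorded draft typically looks like the test below before refactoring.
import { test, expect } from '@playwright/test';

test('recorded draft (to be refactored)', async ({ page }) => {
  await page.goto('https://example.com/');
  await page.getByRole('link', { name: 'More information' }).click();
  await expect(page).toHaveURL(/iana/);
});
```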

How does this workshop handle the shortcomings of POM in modern apps? Does it explore options like the Screenplay Pattern, and what are the advantages and disadvantages of each?
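For readers new to the pattern, a minimal, library-agnostic Screenplay-style sketch in Playwright might look like this (the `Actor` and `Task` names are illustrative, not taken from a specific framework such as Serenity/JS):

```ts
import { test, expect, type Page } from '@playwright/test';

// Minimal Screenplay-style sketch: an Actor performs Tasks, and Tasks
// compose Playwright interactions instead of living as methods on a page class.
type Task = (actor: Actor) => Promise<void>;

class Actor {
  constructor(public readonly name: string, public readonly page: Page) {}
  async attemptsTo(...tasks: Task[]) {
    for (const task of tasks) await task(this);
  }
}

const openLoginPage = (): Task => async ({ page }) => {
  await page.goto('https://example.com/login'); // placeholder URL
};

const logIn = (user: string, password: string): Task => async ({ page }) => {
  await page.getByLabel('Username').fill(user);
  await page.getByLabel('Password').fill(password);
  await page.getByRole('button', { name: 'Log in' }).click();
};

test('screenplay-style login', async ({ page }) => {
  const alice = new Actor('Alice', page);
  await alice.attemptsTo(openLoginPage(), logIn('alice', 's3cret'));
  await expect(page).toHaveURL(/dashboard/);
});
```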

Will AI-powered test case generation actually improve coverage, or just create bloated, meaningless test suites?

How can Playwright’s network interception capabilities, plus AI-powered anomaly detection, be used to proactively identify potential performance bottlenecks or security vulnerabilities within web apps during automated testing?
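As a starting point, a test can record every finished request during a UI flow and flag slow or failing calls; the collected records could then feed a separate anomaly-detection step. A minimal sketch with an arbitrary 1.5 s threshold and placeholder URLs:

```ts
import { test, expect, type Request } from '@playwright/test';

// Sketch: record finished requests during a UI flow, then flag calls that
// were slow or returned a server error. A fixed threshold stands in for a
// real anomaly-detection step.
test('flag slow or failing API calls during a UI flow', async ({ page }) => {
  const finished: Request[] = [];
  page.on('requestfinished', (request) => finished.push(request));

  // Optional interception: block third-party noise so timings stay comparable.
  await page.route('**/analytics/**', (route) => route.abort());

  await page.goto('https://example.com');                     // placeholder URL
  await page.getByRole('link', { name: 'Products' }).click(); // placeholder step

  const anomalies: { url: string; status?: number; ms: number }[] = [];
  for (const request of finished) {
    const response = await request.response();
    const ms = request.timing().responseEnd; // elapsed ms from request start
    if (!response || response.status() >= 500 || ms > 1500) {
      anomalies.push({ url: request.url(), status: response?.status(), ms });
    }
  }
  expect(anomalies, JSON.stringify(anomalies, null, 2)).toEqual([]);
});
```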

What is your recommended strategy for managing test data when running Playwright tests in parallel? How can we ensure tests are fully isolated and don’t corrupt each other’s data?
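A common approach is a worker-scoped fixture that provisions isolated data per parallel worker. Sketch below; `createAccount`/`deleteAccount` stand in for your own seeding and cleanup code:

```ts
import { test as base, expect } from '@playwright/test';

// Sketch: give every parallel worker its own account so tests never share
// mutable server-side state.
type Account = { username: string; password: string };

const test = base.extend<{}, { account: Account }>({
  account: [async ({}, use, workerInfo) => {
    const account: Account = {
      username: `e2e-user-${workerInfo.workerIndex}-${Date.now()}`,
      password: 'Str0ng!Passw0rd',
    };
    // await createAccount(account);   // hypothetical setup call
    await use(account);
    // await deleteAccount(account);   // hypothetical teardown call
  }, { scope: 'worker' }],
});

test('each worker logs in with its own account', async ({ page, account }) => {
  await page.goto('https://example.com/login'); // placeholder URL
  await page.getByLabel('Username').fill(account.username);
  await page.getByLabel('Password').fill(account.password);
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page.getByText(account.username)).toBeVisible();
});
```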

What are the technical considerations and potential limitations when using AI-driven Playwright tests, such as Auto Playwright, for complex, multi-page workflows involving dynamic data and conditional logic?

How can you integrate Playwright with test data management tools like Faker.js or Testcontainers?
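For Faker.js, a small fixture can hand each test freshly generated data (sketch below, assuming `@faker-js/faker` v8+); Testcontainers fits one level lower, typically spinning up a disposable database per worker in a global or worker-scoped setup.

```ts
import { test as base, expect } from '@playwright/test';
import { faker } from '@faker-js/faker';

// Sketch: a fixture that hands each test freshly generated, realistic data.
type NewUser = { fullName: string; email: string };

const test = base.extend<{ newUser: NewUser }>({
  newUser: async ({}, use) => {
    // faker.seed(42); // optionally seed to reproduce a failing run's data
    await use({
      fullName: faker.person.fullName(),
      email: faker.internet.email(),
    });
  },
});

test('sign-up form accepts generated data', async ({ page, newUser }) => {
  await page.goto('https://example.com/signup'); // placeholder URL
  await page.getByLabel('Full name').fill(newUser.fullName);
  await page.getByLabel('Email').fill(newUser.email);
  await page.getByRole('button', { name: 'Sign up' }).click();
  await expect(page.getByText('Check your inbox')).toBeVisible(); // placeholder assertion
});
```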

Playwright is great for speed and coverage, but how do you manage test flakiness at scale?
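Configuration is only part of the answer, but a few settings help contain and diagnose flakiness rather than hide it. A sketch of a `playwright.config.ts`:

```ts
// playwright.config.ts (sketch): settings that help contain and diagnose
// flakiness at scale rather than hide it.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,
  retries: process.env.CI ? 2 : 0,   // retry only in CI so local flakes stay visible
  workers: process.env.CI ? 4 : undefined,
  reporter: [['html'], ['list']],
  use: {
    trace: 'on-first-retry',         // capture a trace whenever a retry happens
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
    actionTimeout: 10_000,
  },
});
```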

What are the best practices for writing maintainable Playwright scripts when incorporating AI-based test generation?

How can Playwright and AI be used to implement advanced performance testing scenarios, such as emulating various network conditions, simulating heavy user load, and analyzing the impact of these factors?
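For the network-condition part, a Chromium CDP session can throttle bandwidth and latency during a normal Playwright test (sketch below, placeholder URL); heavy multi-user load is usually better generated with a dedicated load-testing tool alongside Playwright.

```ts
import { test, expect } from '@playwright/test';

// Sketch: Chromium-only network throttling via a CDP session. Playwright here
// verifies the user experience under a degraded connection, not server load.
test('page stays usable on a slow connection', async ({ page, browserName }) => {
  test.skip(browserName !== 'chromium', 'CDP throttling is Chromium-only');

  const client = await page.context().newCDPSession(page);
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400,                          // ms of added round-trip latency
    downloadThroughput: (750 * 1024) / 8,  // ~750 kbps
    uploadThroughput: (250 * 1024) / 8,    // ~250 kbps
  });

  const start = Date.now();
  await page.goto('https://example.com'); // placeholder URL
  await expect(page.getByRole('heading').first()).toBeVisible();
  console.log(`Loaded under throttling in ${Date.now() - start} ms`);
});
```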

How can AI-driven visual regression testing in Playwright effectively differentiate between legitimate UI changes and rendering anomalies, and what strategies can be used to minimize false positives and maintain a more robust baseline?
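On the Playwright side, masking volatile regions and allowing a small diff budget keeps the baseline robust before any AI-based triage is layered on top. Sketch with placeholder selectors:

```ts
import { test, expect } from '@playwright/test';

// Sketch: keep visual baselines robust by masking volatile regions and
// allowing a small pixel-diff budget, so only meaningful changes fail.
test('dashboard matches visual baseline', async ({ page }) => {
  await page.goto('https://example.com/dashboard'); // placeholder URL

  await expect(page).toHaveScreenshot('dashboard.png', {
    mask: [
      page.getByTestId('live-clock'),  // hide content that changes every run
      page.locator('.ad-banner'),
    ],
    maxDiffPixelRatio: 0.01,           // tolerate tiny anti-aliasing noise
    animations: 'disabled',            // freeze CSS animations before capture
  });
});
```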

Can AI models be trained on Playwright test results to predict failures or optimize test coverage?

How do you debug failing tests in Playwright?
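The usual entry points are the Playwright Inspector, UI mode, and the trace viewer; a sketch:

```ts
import { test, expect } from '@playwright/test';

// Common debugging entry points:
//   npx playwright test --debug          # run with the Playwright Inspector
//   npx playwright test --ui             # step through tests in UI mode
//   npx playwright show-trace trace.zip  # inspect a trace from a CI failure
test('example with an explicit breakpoint', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  await page.pause();                     // opens the Inspector when run headed
  await expect(page.getByRole('heading').first()).toBeVisible();
});
```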

If AI starts writing Playwright tests on its own, should we promote it to QA lead or ask it to write tests for itself first?

Is it possible to generate API tests from UI flow tests using Playwright?
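One practical route: observe the API calls a UI flow makes, then hit the same endpoints with Playwright’s `request` fixture. Sketch with placeholder endpoints:

```ts
import { test, expect } from '@playwright/test';

// Sketch: record the API calls a UI flow makes, then exercise one of them
// directly with the request fixture. A follow-up step (or an AI assistant)
// can turn the recorded calls into standalone API tests.
test('derive an API check from a UI flow', async ({ page, request }) => {
  const apiCalls: { method: string; url: string }[] = [];
  page.on('request', (req) => {
    if (req.url().includes('/api/')) {
      apiCalls.push({ method: req.method(), url: req.url() });
    }
  });

  // Drive the UI flow that exercises the endpoint.
  await page.goto('https://example.com/products'); // placeholder URL
  await page.getByRole('link', { name: 'Products' }).click();
  console.log('Observed API calls:', apiCalls);

  // Hit one of the observed endpoints directly, assuming it is a GET.
  const response = await request.get('https://example.com/api/products'); // placeholder endpoint
  expect(response.ok()).toBeTruthy();
  expect(Array.isArray(await response.json())).toBeTruthy();
});
```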