Exploratory Testing with AI | Testμ 2025

Join Gil Zilberfeld, CTO at TestinGil, as he breaks down the core components of exploratory testing and demonstrates how AI can enhance every stage, from charter creation and test case suggestions to prioritization and bug reporting.

Discover how AI-driven tools can streamline documentation, boost test efficiency, and help manage workloads more effectively.

Don’t miss out; book your free spot now!

Since AI often learns from existing patterns, how do we prevent it from reinforcing blind spots?

Exploratory testing relies heavily on human intuition and curiosity. How can AI truly augment this process without limiting creativity?

How will AI change exploratory testing?

How do you balance using AI for exploratory testing without losing the tester’s own intuition?

If AI learns from existing systems and behaviors, is there a danger it will only explore the expected and miss the unexpected?

AI works on probabilities and predictions. What if an AI system decides not to explore an edge case that later turns into a critical defect in production?

How can AI enhance exploratory testing by not just detecting issues, but also predicting user behavior and uncovering hidden risks at scale?

How would you design an AI-driven exploratory testing framework that not only discovers unknown defects dynamically but also learns from previous sessions to continuously improve test coverage and risk detection?

Exploration is often playful and experimental. Can AI ever replicate the playfulness that leads humans to discover hidden issues?

How do you determine which exploratory testing tasks are safe to delegate to AI without losing context or insight?

With AI generating test cases (for both API and UI exploratory testing), what metrics/criteria can test engineers use to evaluate the quality, relevance, and coverage of these AI-generated suggestions?

In exploratory testing, how will AI assistance streamline the process of documenting findings, sessions, and bug reports while maintaining accuracy, clarity, and consistency?

What are your thoughts on leveraging AI for exploratory testing beyond using prompts to help identify test coverage and charters?

What percentage of your effort would you say goes into confirming that AI is producing correct results (and not hallucinating)?

Is there something similar available for mobile UI testing?

What are the ethical considerations and potential biases when using AI to augment or guide exploratory testing efforts? How would you mitigate those risks and ensure fair, representative test coverage?

Which agent/tool was used for creating these cases? Is it ChatGPT only?