If AI starts testing AI, who tests the tester? And what happens if it decides my code is ‘too human’?
What happens if AI-based test automation becomes too rigid or inflexible to adapt to app changes?
When we automate tasks with AI, how do we track whether the AI is operating within its intended boundaries or has strayed beyond them?
How do we keep AI testing models updated with rapidly changing requirements?
How do we handle situations where AI introduces flaky or redundant tests at scale?
What are the dangers of relying too much on AI to decide test priorities?
How do we encourage, inspire, or even teach critical thinking? I’ve found this skill to have been in decline for a long time.
If AI in testing is inevitable, what guardrails (ethical, practical, or cultural) should teams put in place before adopting it?
How do organizations balance the pressure to adopt AI in testing with the risk of creating unsafe dependencies?
What strategies can be implemented to address the skill gap within QA teams and prepare them for the shift in mindset required to work effectively with AI in testing?
What could go wrong in testing AI itself? Aren’t we forgetting that perspective on testing? There is much talk about how we can use AI, but how are we going to test all the new software that has AI in it?
What mechanisms should be implemented to safeguard the privacy and security of sensitive data used to train AI models for testing (especially when considering model inversion and data extraction risks)?
Have you found an LLM that isn’t too restrictive but also not too biased?
Testing has evolved from traditional manual testing to automation and now to AI-driven approaches. In this changing era, what core principles must a tester build on to stay relevant and provide meaningful analysis?
Could AI reinforce existing gaps in test coverage rather than close them?
How can a lack of transparency in AI models erode testers’ trust in results?
Do you think we’re ready to trust AI with production-level test sign-offs?
How can AI errors propagate across CI/CD pipelines if not properly monitored?
What risks are there when training AI on sensitive production data for test creation?
How do AI model updates or version changes introduce unexpected test failures?