How do you balance speed and accuracy when scaling AI testing across large, complex systems?
How should organizations measure the effectiveness of QA beyond defect detection, e.g., business impact, customer satisfaction, or release confidence?
What does confidence mean when making AI-driven testing decisions?
What skill sets should senior testers develop to confidently supervise AI-driven testing?
How do you balance the speed of adopting autonomous test creation with the team’s trust in its reliability?
How can QA leaders foster a culture of innovation that balances traditional testing rigor with AI/automation experimentation?
How do you handle false positives or false negatives flagged by AI?
Given the potential for bias in AI training data, what ongoing mechanisms and ethical frameworks are key to ensuring that AI-driven testing decisions are efficient, fair, and equitable across diverse user groups and contexts?
How can AI be used to provide deeper insights into software quality, such as predicting potential user experience issues or identifying areas for proactive improvement?
What metrics have proven most effective in measuring stakeholder confidence in AI testing outputs?
How can we validate that our tests are effectively verifying system behavior?
How do you handle explainability when AI marks a test as redundant, but developers still rely on it for debugging?
How do you explain AI-generated test prioritization to stakeholders who expect transparency and control?
To what extent has AI been adopted in your companies?
What skill gaps do you see emerging for QA engineers in the next 3–5 years, and how should teams address them?
How can we trust AI tools such as OpenAI or Gemini with security and data when we provide screenshots or source code access? These models learn from our inputs, which could lead to a breach of our solution or software.
How do you see the role of senior testers evolving in organizations that heavily adopt AI testing?
How do you convince stakeholders to trust AI’s decision on flaky test detection over manual judgment?
When AI creates tests for code it also wrote, doesn't it carry the same bias that exists when developers write their own test cases? Doesn't that create a situation where missed issues are still overlooked unless there is human oversight?