Can automation alone keep pace with the complexity of testing AI, or do we need new paradigms?
But how would rethinking testing skills help overcome the fear of AI overtaking testing? Could you briefly explain this for everyone?
What new metrics or benchmarks are needed to measure quality in AI-driven software?
If AI can generate and execute tests, what unique value does a tester-in-the-loop bring that a generic human-in-the-loop might not?