What processes or ideologies did you use to leverage AI in the quality world? Could you share some key insights we can take away?
If AI starts finding bugs in our code, should we log it as a defect or call it “self-improvement”?
If quality is a key objective, where does trust sit (or where should it sit) on that spectrum?
How can quality testing be maintained in AI?
What architectural pattern would you use to design a scalable, fault-tolerant, and highly available microservice system?
In a world where AI can predict performance bottlenecks, do human engineers become optimization experts, or risk forecasters?
Are different AI models suggested for different testing models/frameworks?
Yes, each model is optimised for something specific, so it's a good idea to research each model and what it does best. For example, GPT-4o is a multimodal model that supports both text and images. Claude 3.7 Sonnet is an advanced model suited to tasks that require structured reasoning, and it can deliver on deeper work; it's good at bug fixes and advanced optimisation. Gemini 2.0 Flash, on the other hand, is good at generating small reusable pieces of code: it's a high-speed multimodal model optimised for real-time inputs and agentic reasoning, making it a good fit for Agent Mode. You can read more here: GitHub Copilot - Methods, modes, and Models: Which one is the best? - DEV Community
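To make the "different model for different task" idea concrete, here is a minimal sketch of how a test team might route work to models. The model names come from the answer above; the task categories, dictionary, and `pick_model` helper are illustrative assumptions, not any official Copilot API.

```python
# Hypothetical mapping from testing task to a suggested model choice.
# Model names are from the answer above; the task labels and the
# selection logic itself are assumptions for illustration only.
TASK_TO_MODEL = {
    "ui_screenshot_review": "GPT-4o",          # multimodal: handles text and images
    "bug_fix": "Claude 3.7 Sonnet",            # structured reasoning, deeper tasks
    "snippet_generation": "Gemini 2.0 Flash",  # fast, small reusable code
}

def pick_model(task: str, default: str = "GPT-4o") -> str:
    """Return a suggested model for a testing task, with a fallback default."""
    return TASK_TO_MODEL.get(task, default)

print(pick_model("bug_fix"))         # a reasoning-heavy task
print(pick_model("exploratory"))     # unknown task falls back to the default
```

The point is not the specific mapping but the habit it encodes: decide per task type which model's strengths apply, rather than using one model for everything.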
I think both. We become directors and overseers of our AI tooling, while also examining the AI's limitations and risks and doing our best to mitigate them.