Experts Chris, Kiran, and others will discuss quality engineering’s role in AI development.
Learn about quality assurance, ethical considerations, and the testing frameworks used to manage AI risks, and gain insight into best practices and open challenges in AI technologies.
Not registered yet? Don’t miss out—secure your free tickets now:
Already registered? Share your questions in the thread below.
Here are some unanswered questions that were asked in the session:
With the increasing integration of AI in quality engineering, what strategies should be adopted to ensure that AI-driven testing maintains high standards of reliability and minimizes the risk of introducing new types of errors or biases?
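Not a full answer, but one strategy that comes up a lot for exactly this is metamorphic testing: rather than asserting exact outputs, you assert relations that must hold between outputs for related inputs, which catches new error classes even without a perfect oracle. A minimal sketch in Python; the tiny lexicon "model" is just a stand-in for whatever model is actually under test:

```python
# Minimal metamorphic-testing sketch. The toy lexicon model below is a
# stand-in for the real model under test.
POSITIVE = {"great", "excellent", "stable", "quickly"}
NEGATORS = {"not", "never"}

def predict_sentiment(text: str) -> float:
    """Toy stand-in: +1 per positive word, sign flipped by any negator."""
    words = text.lower().replace(".", "").split()
    score = float(sum(w in POSITIVE for w in words))
    return -score if any(w in NEGATORS for w in words) else score

def test_synonym_invariance():
    # Relation: swapping a word for a close synonym must not flip the sign.
    s1 = predict_sentiment("The release was great and stable.")
    s2 = predict_sentiment("The release was excellent and stable.")
    assert (s1 >= 0) == (s2 >= 0), (s1, s2)

def test_negation_reversal():
    # Relation: negating a positive claim must lower the score.
    assert predict_sentiment("The dashboard loads quickly.") > \
        predict_sentiment("The dashboard does not load quickly.")

test_synonym_invariance()
test_negation_reversal()
print("metamorphic relations held")
```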
A query: how safe is it to put your user story descriptions and details into generative AI tools to generate test scenarios and cases? How can we ensure the content we submit remains secure and our product/project secrets are not revealed?
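One common mitigation here is to scrub identifiers and secrets from the text before it ever leaves your environment. A rough sketch; the regex patterns and the ticket-ID format are purely illustrative, and a real setup would lean on a vetted secret-scanning or PII-detection tool rather than ad hoc regexes:

```python
import re

# Illustrative patterns only; extend or replace with a proper scanner.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b(?:api|secret)[-_ ]?key\s*[:=]\s*\S+", re.I), "<API_KEY>"),
    (re.compile(r"\bPROJ-\d+\b"), "<TICKET_ID>"),  # hypothetical ticket ids
]

def scrub(text: str) -> str:
    """Replace sensitive substrings before the prompt leaves our network."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

story = (
    "As a user of PROJ-1234 I log in with admin@example.com; "
    "staging api_key = sk-live-abc123."
)
prompt = "Generate test cases for this story:\n" + scrub(story)
print(prompt)  # hand the scrubbed prompt to whatever LLM client you use
```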
Love what Subba is talking about! I think it’s crucial to stop for just a second and assess these aspects of testing AI as priority one. How can this be put into general practice?
Given the dynamic and evolving nature of AI models, how can we as a quality engineering team keep up with the scalability challenges of testing AI systems that continuously learn and adapt?
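A pattern that scales reasonably well here is automated drift monitoring: instead of re-validating everything by hand on every model update, let the pipeline flag when the output distribution shifts and only then trigger the heavyweight checks. A rough sketch using a two-sample Kolmogorov-Smirnov test on simulated scores; the alpha threshold is illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins: scores from the validated baseline vs. today's retrained model.
baseline_scores = rng.normal(0.70, 0.05, size=5_000)
current_scores = rng.normal(0.64, 0.05, size=5_000)  # simulated drift

# Two-sample KS test: has the score distribution shifted?
stat, p_value = ks_2samp(baseline_scores, current_scores)

ALPHA = 0.01  # illustrative; tune per model and risk appetite
if p_value < ALPHA:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): "
          "trigger the full regression suite / human review.")
else:
    print("No significant drift: lightweight smoke checks are enough.")
```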
How do we adapt traditional validation and verification processes when dealing with AI systems, especially when the AI’s decision-making is not entirely transparent or deterministic?
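One widely used adaptation for non-deterministic systems is to swap exact-output assertions for statistical acceptance criteria: run the system many times against a property-based oracle and require a minimum pass rate. A minimal sketch, where `ai_component` is a hypothetical non-deterministic system under test and the 95% bar is a placeholder:

```python
import random

random.seed(7)  # pin the demo so the sketch is reproducible

def ai_component(query: str) -> str:
    """Hypothetical non-deterministic system under test."""
    return ("Our refund policy allows returns within 30 days."
            if random.random() < 0.97 else "unrelated answer")

def passes(output: str) -> bool:
    # Oracle as a property check, not an exact-match assertion.
    return "refund" in output.lower()

def statistical_acceptance(n_runs: int = 500, required_rate: float = 0.95):
    wins = sum(passes(ai_component("What is the refund policy?"))
               for _ in range(n_runs))
    rate = wins / n_runs
    assert rate >= required_rate, f"pass rate {rate:.1%} < {required_rate:.0%}"
    print(f"accepted: {rate:.1%} pass rate over {n_runs} runs")

statistical_acceptance()
```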
What best practices should we follow to ensure that the AI models we develop or test are accurate, reliable, and free from biases? How does quality engineering contribute to this?
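On the bias point specifically, one practice is to bake a fairness metric directly into the test suite, so a regression in any group’s outcomes fails the build just like a functional bug would. A toy sketch computing the demographic parity gap on fabricated data; the 0.1 threshold is a placeholder for what is ultimately a policy decision:

```python
# Toy fairness check: demographic parity gap between two groups.
# The data below is fabricated purely for illustration.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # model approvals
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(positive_rate("a") - positive_rate("b"))
THRESHOLD = 0.1  # illustrative; the acceptable gap is a policy decision
print(f"parity gap = {gap:.2f}")
assert gap <= THRESHOLD, "demographic parity gap exceeds threshold"
```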
I can’t help but wonder… how much sensitive data do we estimate is already out there, entered by folks in error or out of ignorance, that could take down entire companies if it were ever exposed to the public?