Building AI Fluency: Leading Teams Through the Learning Curve | Testμ 2025

Join Kelly Vaughn, Senior Engineering Manager at Zapier, for an insightful session on Building AI Fluency: Leading Teams Through the Learning Curve.

Kelly will share real-world lessons from Zapier’s AI transformation, exploring how to move beyond surface-level tool adoption and guide teams toward true AI fluency.

Discover strategies to address resistance, create psychological safety, and balance exploration with productivity without compromising delivery.

Key Takeaways:

✅ Practical frameworks for gradual AI adoption across diverse skill levels

✅ Methods to foster psychological safety and reduce resistance during adoption

✅ Techniques to balance learning, experimentation, and delivery commitments

📅 Don’t miss this opportunity to gain actionable strategies for leading teams through AI adoption with confidence. Register Now!

How can leaders accurately assess where their team is on the AI learning curve?

The Composer and the Orchestra: If AI is a powerful new instrument in your team’s orchestra, how do you, as the conductor, ensure they learn to play it with nuance and artistry, rather than just playing it loud?

The Gardener’s Dilemma: Failure is often the best fertilizer for learning. How do you create a “greenhouse” where your team can safely experiment and even fail with AI, cultivating breakthroughs without the fear of a “killing frost” from management?

What baseline AI skills should every tester or developer have in the next 2–3 years, regardless of role?

What was the biggest challenge your team at Zapier faced when first adopting AI tools in testing?

What steps should organizations take to embed AI literacy as a competency for every role, not just technical positions?

With so many AI models available, what should be the first, second, and third steps, in the context of QE skill sets, to enable teams to evolve and embrace AI?

How do you train teams not just in AI usage but in responsible practices—like bias awareness, test data integrity, and transparency?

With so many AI models available, what steps should an individual with a QE skill set take to embrace AI?

How can a culture of experimentation and safe failure be reinforced within teams (without opening doors to shadow IT), to encourage them to explore AI tools and applications without fear of negative repercussions?

What are the most effective ways to facilitate collaboration between team members with varying levels of AI fluency (such as technical specialists and non-technical colleagues)?

What are proven ways to address resistance and anxieties among team members about AI’s impact on job roles and security?
In a rapidly advancing AI landscape, how can teams foster continuous learning and experimentation without overwhelming staff?

What advice would you give to leaders who are worried that AI adoption will slow down delivery at first?

What are the best ways for an org to identify and empower internal AI champions and create a network for knowledge sharing and peer-to-peer learning within the organization?

What are the most critical mindset or skill changes testers and engineers need to embrace as AI becomes part of daily workflows?

Do you think every tester needs deep AI knowledge, or is basic fluency enough?