How to keep up with fast-moving AI trends?
What new KPIs will define success for QA teams in the AI era?
How does integrating AI/GenAI into QA processes…?
Which apps are best for testing all aspects of customized software?
AI can only be used as a test automation accelerator; it cannot make testing autonomous. I would like to know your insights on this.
How does the architecture support environment-aware execution (different OS versions, devices, cloud vs local)?
What’s one small win you’ve had using AI in your QA work?
What KPIs should be tracked?
I’d say the best way to begin your GenAI journey in quality engineering is to start small and focused. Look for those testing tasks that are repetitive and take up a lot of your team’s time: things like test data generation, regression checks, or log analysis. Try introducing GenAI automation in just one of those areas first. Once you see how it improves efficiency and reduces manual effort, it becomes much easier to expand and get the team on board. Even one successful workflow can clearly show the real value GenAI brings to your testing process.
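To make that concrete, here’s a minimal sketch of what one such focused workflow could look like: using an LLM to generate test data for a signup form. This is purely illustrative, not something from the session; the OpenAI client, model name, and form fields are all my own assumptions, so swap in whatever your team actually uses.

```python
# Minimal sketch (illustrative assumptions): use an LLM to generate test data
# for one focused workflow. Assumes the `openai` package is installed and
# OPENAI_API_KEY is set; the model name and form fields are placeholders.
import json
from openai import OpenAI

client = OpenAI()

def generate_signup_test_data(count: int = 5) -> list[dict]:
    """Ask the model for edge-case signup records as a JSON array."""
    prompt = (
        f"Generate {count} test records for a signup form with fields "
        "email, password, and age. Include boundary and invalid values. "
        "Return only a JSON array of objects."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use your team's standard
        messages=[{"role": "user", "content": prompt}],
    )
    # Never feed model output straight into tests; parse and review it first.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    for record in generate_signup_test_data():
        print(record)
```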
In the early stages of adopting AI for testing, it’s important for quality engineers to start by understanding a few core things — what AI can and can’t do, how its decisions can be explained, and how much data quality matters. Many teams get slowed down because they expect AI to handle everything automatically or even replace testers altogether. In reality, AI works best when it’s used to support human judgment, not replace it. The goal should be to start small, focus on areas where AI can make the most impact, like speeding up repetitive tasks or analyzing large sets of data, and grow from there as your understanding deepens.
Hey Everyone,
In the “Crawl” phase, it’s all about building a solid understanding of the basics before diving deep into AI adoption. Testers should start by learning how prompting works, how AI models use data to make decisions, and what model inference actually means in the context of testing.
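If it helps to make “inference” less abstract, here’s a tiny, hedged example of what it looks like in a testing context: you hand a trained model some input (a failure message plus candidate labels) and it returns a prediction you can act on. The Hugging Face pipeline and the labels are my own illustrative choices, not tools from the talk.

```python
# Illustrative only: "inference" = giving input to a trained model and using its output.
# Assumes the `transformers` package is installed; the default zero-shot model
# is downloaded on first run.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")

failure_message = "TimeoutError: element '#checkout-button' not found after 30s"
labels = ["flaky environment issue", "genuine product defect", "test script error"]

result = classifier(failure_message, candidate_labels=labels)

# The model only ranks the labels; a tester still decides what to do with the ranking.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```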
One big misconception many teams have is thinking that AI will instantly deliver perfect test coverage or do all the heavy lifting on its own. In reality, it’s more like a co-pilot: it supports your testing process, helps you move faster, and reduces repetitive work, but it still needs your guidance and expertise to get the best results.
I believe AI will act more as a powerful co-pilot for testers rather than replacing them. It’s here to make our jobs smarter and faster, not take them away. In the next five years, the most valuable QA skills will revolve around understanding how to work with AI tools, knowing how to interpret their insights, question their results, and apply them effectively to real-world scenarios. Alongside that, strong domain knowledge and critical thinking will become even more important than just writing repetitive test scripts. Testers who can combine technical understanding with analytical and contextual thinking will really stand out.
If AI takes care of the repetitive QA tasks, I’d focus more on exploratory and scenario-based testing, the kind that really needs human intuition and creativity. I’d also spend more time on designing better quality strategies and improving overall test coverage. With GenAI stepping in, the QA role will evolve into a more analytical one, where we guide, review, and refine what AI produces, ensuring the final outcome truly meets the user and business needs.
Hey everyone! I had the chance to attend Subba Lakshmi Ramaswamy’s session on “AI & GenAI in Quality Engineering: Crawl, Walk, Run” at Testμ 2025, and as someone who’s part of the LambdaTest community helping testers with their day-to-day challenges, this topic really hit home.
When it comes to measuring whether AI-driven automation is truly improving your testing efforts, it’s all about tracking the right KPIs. Here are a few that matter most:
- Test coverage: See if your automation is helping you cover more areas of your app—especially the ones that often get skipped in manual testing.
- Defect detection rate: Track how effectively your tests are catching bugs early in the cycle.
- Time saved: Measure how much repetitive manual work your team no longer needs to do thanks to automation.
- Reduction in flakiness: If your test runs are becoming more stable and reliable, that’s a big win.
These metrics go beyond just counting how many tests you’ve run; they show real progress in quality, efficiency, and confidence in your testing process.
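If you want to see these tracked programmatically, here’s a rough sketch of how a few of them could be computed from plain test-run records. The record structure and numbers are made up for illustration; in practice you’d pull this data from your CI or test-management reports.

```python
# Rough sketch: deriving a few of the KPIs above from simple test-run records.
# The record structure and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class TestRun:
    name: str
    found_defect: bool          # did this run surface a real bug?
    flaky: bool                 # did it need a retry to pass?
    manual_minutes_saved: int   # estimated manual effort this run replaced

runs = [
    TestRun("login_smoke", found_defect=False, flaky=False, manual_minutes_saved=10),
    TestRun("checkout_regression", found_defect=True, flaky=False, manual_minutes_saved=25),
    TestRun("profile_update", found_defect=False, flaky=True, manual_minutes_saved=15),
    TestRun("search_filters", found_defect=False, flaky=False, manual_minutes_saved=20),
]

total = len(runs)
defect_detection_rate = sum(r.found_defect for r in runs) / total
flaky_rate = sum(r.flaky for r in runs) / total
time_saved = sum(r.manual_minutes_saved for r in runs)

print(f"Defect detection rate: {defect_detection_rate:.0%}")
print(f"Flaky run rate:        {flaky_rate:.0%}")
print(f"Manual time saved:     {time_saved} minutes per cycle")
```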
A lot of teams are held back by a few common things: they’re not sure where to start, they don’t have enough in-house expertise, or they can’t clearly see the ROI yet. The best way to move forward is to start small. Try running a pilot project in a low-risk area of your application to see how GenAI can actually make a difference. Once you see real results, it becomes much easier to expand it across the QA process with confidence.
Hey everyone! I’m Vishal, one of the attendees at Testμ 2025 and a community contributor at LambdaTest. I had the chance to attend the insightful session by Subba Lakshmi Ramaswamy on “AI & GenAI in Quality Engineering: Crawl, Walk, Run.”
When it comes to keeping AI in testing ethical, explainable, and bias-free—especially at scale—it really comes down to how you start and grow your adoption. Many teams struggle because they either don’t have the right expertise, the ROI isn’t clear yet, or integrating AI tools into existing processes feels complicated.
A practical way forward is to start small, maybe run a pilot project in a low-risk area of your testing pipeline. This lets you experiment safely, learn what works (and what doesn’t), and build confidence within the team. Once you can clearly show the value, like improved efficiency or better test coverage, it becomes much easier to scale responsibly while keeping ethics and transparency at the core.
That’s a really good question, and one that many teams are starting to think about as they explore AI-driven workflows. From my perspective, a hybrid setup makes the most sense.
Having a team-managed agent ensures there’s a consistent foundation: the same tone, quality standards, and processes everyone follows. But at the same time, giving each tester their own personalized style file adds flexibility. Every tester has their own strengths and ways of approaching problems, and this setup lets them bring that individuality while still staying aligned with the team’s overall direction.
In short, it’s about finding the right balance: keeping consistency where it matters and allowing creativity where it adds value.
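One way to picture that hybrid setup: the agent loads a team-managed baseline first, and a tester’s personal style file only overrides the parts it defines. Everything in the sketch below (field names and values) is hypothetical, just to show the merge order.

```python
# Hypothetical sketch of a hybrid setup: a shared team baseline plus a small
# per-tester style overlay. All field names and values are made up.

TEAM_BASELINE = {
    "tone": "concise",
    "defect_severity_labels": ["P0", "P1", "P2"],
    "required_sections": ["steps", "expected", "actual"],
}

def build_agent_profile(personal_style: dict) -> dict:
    """Team settings come first; the personal style file only overrides what it defines."""
    profile = dict(TEAM_BASELINE)
    profile.update(personal_style)
    return profile

# A tester who prefers more detail and has favourite heuristics:
my_style = {"tone": "detailed", "favorite_heuristics": ["boundary values", "state transitions"]}
print(build_agent_profile(my_style))
```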
When you’re picking AI or GenAI testing tools, start with what already works in your setup. Make sure the tool integrates smoothly with your existing CI/CD pipeline and fits well into your current workflows; this saves a lot of headaches later. Think about scalability too; the tool should be able to grow with your team and projects.
Another big one is explainability. You’ll want tools that let you understand why a certain decision or result was made, not just what the outcome is. And most importantly, choose something that aligns with your long-term quality strategy, not just what looks good today.
Be cautious of tools that claim “complete automation” without any human oversight. In reality, those often sound great in demos but fall short in real-world testing. It’s better to go for a balanced approach where AI supports your engineers rather than replaces them.
That’s a great question, and something every QA team should start thinking about right now. The best way to prepare your team for the AI-driven future of testing is to build a strong foundation.
Start by helping everyone get comfortable with data literacy: understanding how data is collected, cleaned, and used. Then move on to the basics of AI and machine learning, so your team knows what’s actually happening behind the tools they’ll be using.
It’s also worth spending time on prompt engineering: learning how to ask the right questions or give the right inputs to get useful results from AI systems. And just as important, teach your team how to critically evaluate AI outputs instead of taking them at face value.
The most effective way to do all this? Keep it hands-on. Run small workshops, experiments, or pilot projects where people can explore, fail fast, and learn by doing. That’s how you build real confidence and practical skills that stick.
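For the prompt-engineering and “evaluate the output” parts, a workshop exercise can be as small as the sketch below: build a structured prompt, then validate whatever comes back before trusting it. The prompt wording, the validation rules, and the simulated response are all my own assumptions for the exercise.

```python
# Small hands-on exercise: structure a prompt, then critically check the output
# before trusting it. Prompt wording, checks, and the fake response are illustrative.
import json

def build_prompt(feature: str, count: int) -> str:
    """Prompt engineering: be explicit about role, scope, and output format."""
    return (
        "You are a QA engineer. "
        f"Write {count} test case titles for the '{feature}' feature. "
        "Return only a JSON array of strings, no prose."
    )

def validate_output(raw: str, expected_count: int) -> list[str]:
    """Don't take the model's answer at face value: check shape and content."""
    cases = json.loads(raw)  # raises if the model ignored the format instruction
    if not isinstance(cases, list) or len(cases) != expected_count:
        raise ValueError("Model returned the wrong number or shape of test cases")
    return [c for c in cases if isinstance(c, str) and c.strip()]

print(build_prompt("discount coupons", 3))

# Simulated model response so the exercise runs without any API key:
fake_response = '["Valid coupon applies discount", "Expired coupon is rejected", "Coupon field handles 500-char input"]'
print(validate_output(fake_response, expected_count=3))
```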
Hey,
Keeping AI-driven test suites up to date as the app evolves is key to maintaining accuracy. The best way to do this is by setting up continuous feedback loops that feed real production data and user behavior back into your testing system. This way, the system can automatically detect changes in patterns or workflows.
You can also enable automated test case updates, where the test cases adjust themselves based on detected UI or API changes instead of relying on manual edits. And finally, periodic retraining of your models ensures that they stay aligned with the latest app versions and user scenarios.
Together, these mechanisms help your testing setup stay smart and responsive, learning from every release and update without needing constant human input.
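A very stripped-down version of that feedback loop could look like the sketch below: compare the app surface your suite was built against (here, a map of API endpoints and versions) with the current one, and flag affected tests for review or regeneration. The snapshot format and the test mapping are assumptions for illustration.

```python
# Stripped-down feedback loop: compare the surface the suite was built against
# with the current one, and flag affected tests for review or regeneration.
# The snapshot format and test mapping are made up for illustration.

baseline_endpoints = {"/login": "v1", "/checkout": "v1", "/profile": "v1"}
current_endpoints  = {"/login": "v1", "/checkout": "v2", "/search": "v1"}

tests_by_endpoint = {
    "/login": ["test_login_valid", "test_login_locked_account"],
    "/checkout": ["test_checkout_happy_path", "test_checkout_empty_cart"],
    "/profile": ["test_profile_update"],
}

def tests_to_revisit(baseline: dict, current: dict, mapping: dict) -> list[str]:
    """Flag tests whose endpoint changed version or disappeared."""
    flagged = []
    for endpoint, version in baseline.items():
        if current.get(endpoint) != version:  # changed or removed
            flagged.extend(mapping.get(endpoint, []))
    return flagged

# Here the checkout tests (endpoint changed) and the profile test (endpoint removed) get flagged.
print(tests_to_revisit(baseline_endpoints, current_endpoints, tests_by_endpoint))
```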