Discover how Priyanka and her team tackle the challenge of maintaining top-notch quality while releasing software multiple times a day. Learn about innovative strategies and automation techniques that keep pace with hyper-growth and ensure seamless performance across global applications.
From overcoming complex issues like CORS to ensuring accessibility during data center migrations, Priyanka will share insights on CI/CD pipelines, automated testing frameworks, and real-time monitoring that help achieve near-100% confidence in every release.
Don’t miss this chance to gain expert knowledge on balancing speed with quality and driving excellence in your software releases! Register Now.
Got questions? Drop them below in the thread!
Hi there,
If you couldn’t catch the session live, don’t worry! You can watch the recording here:
Now, let’s go through some of the unanswered questions!
What metrics are most valuable for assessing the success of quality engineering efforts at scale?
What are some key lessons learned from scaling quality engineering that can help avoid common pitfalls?
What tools and frameworks are essential for managing and maintaining high-quality standards at a massive scale?
How do you approach testing in production environments while ensuring minimal disruption for millions of users?
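As context for this question: one common way to test in production with minimal user impact is canary routing, where a small, stable percentage of users is sent to the new release. Here is a minimal sketch (the `in_canary` helper and its bucketing scheme are hypothetical, not from the session):

```python
import hashlib

# Hypothetical canary-routing helper: hash the user ID into a stable
# bucket (0..9999), then admit only the first `percent` of buckets.
# Hashing keeps the cohort small AND consistent: the same user always
# lands on the same side of the rollout.
def in_canary(user_id: str, percent: float) -> bool:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000  # stable bucket 0..9999
    return bucket < percent * 100          # percent=1.0 -> ~1% of users

if __name__ == "__main__":
    canary = sum(in_canary(f"user-{i}", 1.0) for i in range(100_000))
    print(f"{canary / 1000:.2f}% of users routed to canary")
```

Because assignment is deterministic, a user who hits a bug in the canary keeps seeing the canary, which makes issues reproducible before the rollout percentage is raised.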
How do you maintain a robust feedback loop between development and quality teams as the scale of users increases?
How do you prepare your quality engineering processes to handle unexpected traffic surges, such as during product launches or viral events?
Frequent releases increase the risk of regressions where a new update inadvertently breaks something that was previously working. Even with automated regression testing, it’s impossible to guarantee that all scenarios are covered. Your views, please.
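To illustrate the point in this question: a regression suite pins down behavior you already rely on, but it only guards the scenarios someone thought to enumerate. A minimal sketch, using a hypothetical `apply_discount` helper:

```python
# Hypothetical pricing helper, used purely to illustrate regression risk.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(0.0, round(price * (1 - percent / 100), 2))

# The regression suite locks in behavior we already depend on...
def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_full_discount():
    assert apply_discount(50.0, 100) == 0.0

# ...but an edge case nobody wrote a test for (odd prices, rounding
# boundaries) can still regress silently in a later release.
if __name__ == "__main__":
    test_basic_discount()
    test_full_discount()
    print("regression suite passed")
```

This is why teams pair regression suites with production monitoring: the tests catch the scenarios you enumerated, and monitoring catches the ones you didn't.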
Can you share any techniques that have been particularly effective in achieving near-100% confidence in releases despite the rapid development cycles?
Can you share an example of a significant challenge or failure you faced while scaling quality engineering, and how you turned that experience into a learning opportunity?
How do you maintain the sustainability of your quality engineering practices as your user base continues to grow?
When I see pen testing mentioned, I sometimes think of failure injection testing and chaos engineering as well. Can this spectrum of approaches support the scaling/growth of QE programs?
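As context for this question: failure injection usually means deliberately making a dependency fail some fraction of the time and verifying the caller degrades gracefully. A minimal sketch, with hypothetical `FlakyService` and `fetch_with_retries` names (not from the session):

```python
import random

# Hypothetical flaky dependency: a configurable fraction of calls
# raise ConnectionError, simulating an unreliable downstream service.
class FlakyService:
    def __init__(self, failure_rate: float, seed: int = 42):
        self.failure_rate = failure_rate
        self._rng = random.Random(seed)  # seeded for reproducible runs

    def fetch(self) -> str:
        if self._rng.random() < self.failure_rate:
            raise ConnectionError("injected fault")
        return "ok"

# The resilience logic under test: retry up to `attempts` times,
# surfacing the last error only if every attempt fails.
def fetch_with_retries(service: FlakyService, attempts: int = 5) -> str:
    last_error = None
    for _ in range(attempts):
        try:
            return service.fetch()
        except ConnectionError as exc:
            last_error = exc
    raise last_error

if __name__ == "__main__":
    svc = FlakyService(failure_rate=0.3)
    print(fetch_with_retries(svc))  # retries should absorb injected faults
```

Chaos engineering extends the same idea from unit scope to whole systems (killing instances, adding latency), so the two naturally sit on one spectrum as a QE program scales.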
What are the best practices for performance testing at scale to ensure that applications remain stable and responsive under high user loads?
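As context for this question: the core of any load test is firing concurrent requests and reporting latency percentiles rather than averages, since tail latency is what users at scale actually feel. A minimal sketch, where `handle_request` is a stand-in for the real system under test:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the system under test: does a bit of CPU work and
# returns its own latency in seconds.
def handle_request(_: int) -> float:
    start = time.perf_counter()
    sum(range(10_000))  # simulated work
    return time.perf_counter() - start

# Fire `requests` calls across `concurrency` workers and summarize
# the latency distribution with percentiles, not just the mean.
def run_load_test(requests: int = 200, concurrency: int = 20) -> dict:
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(handle_request, range(requests)))
    return {
        "p50": statistics.median(latencies),
        "p95": latencies[int(0.95 * len(latencies)) - 1],
        "max": latencies[-1],
    }

if __name__ == "__main__":
    print(run_load_test())
```

Real-world tools (JMeter, k6, Locust, and the like) add ramp-up profiles and distributed load generation, but the percentile-first reporting shown here is the part worth keeping at any scale.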
How are AI and machine learning transforming quality engineering practices, especially in the context of scaling for millions of users?
What best practices do we need in a large-scale organisation?
Are there impacts to adherence to regulations (like HIPAA for example) when scaling quality engineering to a worldwide and millions-large audience? If so, can these impacts be avoided?
What are the risks and downsides to quality engineering outcomes if scaling too quickly (too fast, too furiously)?
What are the biggest challenges in scaling quality engineering for applications that serve millions of users worldwide, and how can these be effectively managed?
In a fast-paced environment, how do you balance the need for rapid development with the requirement to maintain high-quality standards across the board?