Discussion On How to Start Unf**king Your Tests with Filip Hric | Testμ 2024

Hey,

If you are a tester, you’ve probably used coverage reports not just to aim for 100% coverage but to identify critical areas that lack tests. For you, it’s about strategic coverage—ensuring that essential paths and edge cases are tested rather than inflating metrics. You use these reports to guide discussions on where to focus testing efforts.
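
To make that concrete, here is a minimal sketch of strategic coverage gates, assuming a Jest-based suite; the directory names and threshold numbers are hypothetical examples, not recommendations:

```ts
// jest.config.ts -- a sketch of strategic coverage thresholds.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // A modest global floor: the goal is not 100% everywhere.
    global: { branches: 70, functions: 70, lines: 75, statements: 75 },
    // Stricter gates on critical paths surfaced by the coverage report
    // (hypothetical directories).
    './src/checkout/': { branches: 90, lines: 95 },
    './src/auth/': { branches: 90, lines: 95 },
  },
};

export default config;
```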

Hey guys,

Being in the QE field, I have used chaos engineering to reveal hidden issues by stressing the system in unexpected ways. While it can indeed expose flaky tests, you’ve likely also seen how it can introduce flakiness if not controlled carefully. The key is to define clear boundaries and analyze outcomes critically.
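
As an illustration of "clear boundaries", here is a minimal fault-injection sketch in TypeScript; the env flag, failure rate, and latency cap are all assumed values, not any particular chaos tool's API:

```ts
// A bounded fault injector: every fault is opt-in, capped, and logged,
// so chaos runs stay analyzable instead of becoming a new flakiness source.
const CHAOS_ENABLED = process.env.CHAOS_ENABLED === 'true';
const FAILURE_RATE = 0.05;   // at most 5% of calls fail
const MAX_LATENCY_MS = 2000; // never delay a call longer than 2s

async function chaoticFetch(url: string, init?: RequestInit): Promise<Response> {
  if (CHAOS_ENABLED) {
    const delay = Math.random() * MAX_LATENCY_MS;
    console.log(`[chaos] delaying ${url} by ${Math.round(delay)}ms`);
    await new Promise((resolve) => setTimeout(resolve, delay));
    if (Math.random() < FAILURE_RATE) {
      console.log(`[chaos] injecting failure for ${url}`);
      throw new Error(`[chaos] injected network failure for ${url}`);
    }
  }
  return fetch(url, init);
}
```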

Hi,

If you are a developer or tester, then based on your pragmatic approach, start by aligning your testing strategy to business needs. Focus on building reliable, maintainable tests, and emphasize collaboration with developers. Using tools that provide actionable insights and staying adaptable through continuous learning are strategies that resonate with your methodology.

Hey,

Being a QA, I have seen AI tools like Testim or Mabl help identify flaky tests by analyzing historical data and suggesting optimizations. They can be particularly useful in maintaining element locators and reducing false positives, which aligns well with your goal of minimizing manual maintenance efforts.

Hi,

I would say that, given your interest in leveraging AI, you might favor tools that auto-heal broken locators or suggest test optimizations based on failure patterns. By using AI to detect trends in flakiness, you’ve probably reduced manual intervention and focused on strategic improvements.
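
One way such trend detection can work under the hood is by flagging tests whose outcome flips between runs of the same code. This is a rough, generic sketch, not any vendor's API; the data shape and threshold are assumptions:

```ts
// A test whose result alternates between pass and fail across runs is
// more likely flaky than genuinely broken.
type RunResult = { testName: string; passed: boolean };

function flakyTests(history: RunResult[][], flipThreshold = 0.2): string[] {
  const stats = new Map<string, { changes: number; runs: number; last?: boolean }>();
  for (const run of history) {
    for (const { testName, passed } of run) {
      const entry = stats.get(testName) ?? { changes: 0, runs: 0 };
      if (entry.runs > 0 && entry.last !== passed) entry.changes++;
      entry.runs++;
      entry.last = passed;
      stats.set(testName, entry);
    }
  }
  // Flag tests whose outcome flips in more than flipThreshold of transitions.
  return [...stats.entries()]
    .filter(([, e]) => e.runs > 1 && e.changes / (e.runs - 1) > flipThreshold)
    .map(([name]) => name);
}
```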

Hi,

I believe you value balancing current expertise with continuous learning. You’ve likely invested time in understanding new tools and frameworks, which helps you adapt your approach as testing landscapes evolve. Participating in community discussions and workshops has probably been part of your upskilling strategy.

Hi,

From your experience as a tester, you know that a thorough audit of your test suite to identify high-maintenance tests is crucial. You might use tools like SonarQube for static analysis and Cypress Dashboard for flaky-test tracking. Focusing on reducing dependencies and improving test data management has probably been effective in your projects.
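
As part of such an audit, Cypress's built-in test retries are a cheap flakiness signal: a test that passes only on retry is worth recording. A minimal sketch of the config:

```ts
// cypress.config.ts -- enable retries in CI only, so local runs still
// expose flakiness instead of hiding it.
import { defineConfig } from 'cypress';

export default defineConfig({
  retries: {
    runMode: 2,  // retry up to twice in CI
    openMode: 0, // no retries during interactive development
  },
});
```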

Hi,

Just sharing my thoughts here: you’ve probably noticed that many testers blame tool limitations for flakiness when it often stems from environmental factors or poorly written tests. You’ve advocated for focusing on root causes, like synchronization issues or unstable test data, rather than switching tools hastily.
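
A typical synchronization root cause looks like the before/after sketch below; the selector is a hypothetical example:

```ts
// Before: a fixed sleep papers over a race -- the request may take 3.1s
// on a slow run, or only 300ms on a fast one.
cy.wait(3000);
cy.get('[data-testid="order-summary"]').click();

// After: condition-based waiting fixes the root cause. Cypress retries
// the assertion until the element is actually ready (or times out).
cy.get('[data-testid="order-summary"]', { timeout: 10000 })
  .should('be.visible')
  .click();
```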

Being a tester, I’d say you might approach this by building more resilient tests using strategies like dynamic locators or abstraction layers (a Page Object Model or Component Model). You’ve likely found that staying ahead of changes through regular communication with development teams minimizes disruptions and keeps your tests stable.
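
For the abstraction-layer idea, here is a minimal Page Object sketch for Cypress; the class, selectors, and page are hypothetical. The design benefit is that a UI change then touches one file, not every test:

```ts
// pages/login.page.ts -- centralizes locators and actions for one page.
class LoginPage {
  visit() {
    cy.visit('/login');
  }
  get username() {
    return cy.get('[data-testid="login-username"]');
  }
  get password() {
    return cy.get('[data-testid="login-password"]');
  }
  logIn(user: string, pass: string) {
    this.username.type(user);
    this.password.type(pass, { log: false }); // keep secrets out of logs
    cy.get('[data-testid="login-submit"]').click();
  }
}

export const loginPage = new LoginPage();
```

A test then reads as intent rather than selectors: `loginPage.visit(); loginPage.logIn('qa-user', 'secret');`.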

As a QA, to resolve this I guide my team to look for signs like slow test execution, high flakiness, and difficulty adding new tests. Identifying root causes involves analyzing logs, understanding test dependencies, and reviewing test coverage to ensure it aligns with current application behavior.

I believe that as a tester you may already use automation tools, but a few more can help: SonarQube for code quality checks and GitHub Actions are probably part of your automated test-health monitoring strategy. Implementing pre-merge checks and regular test audits has likely helped you maintain test suite health over time.

Hope this helps 🙂

Hi,

As a developer and tester, you’ve probably experienced firsthand the benefits of stabilizing tests. For example, when you tackled flakiness in a CI pipeline, it likely improved the team’s trust in automated tests and reduced release cycle times, as fewer false positives meant quicker feedback loops.

Hey,

From your experience, timing issues and unstable selectors are frequent culprits. You’ve seen how unreliable synchronization between tests and UI changes, as well as unhandled async operations, can cause tests to fail sporadically.
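
The unhandled-async trap often looks like the sketch below, runnable under Jest; the in-memory "API" stands in for a real backend, where the delay would vary and the failure would be sporadic rather than constant:

```ts
// Hypothetical async helpers: a write that lands after 50ms, and a read.
const orders = new Set<string>();
const createOrder = (sku: string) =>
  new Promise<void>((resolve) =>
    setTimeout(() => { orders.add(sku); resolve(); }, 50));
const fetchOrder = async (sku: string) => orders.has(sku);

test('order appears after creation (buggy: fails by timing)', async () => {
  createOrder('ABC-1'); // missing await: fire-and-forget
  expect(await fetchOrder('ABC-1')).toBe(true); // reads before the write lands
});

test('order appears after creation (stable)', async () => {
  await createOrder('ABC-2'); // wait for the write to complete
  expect(await fetchOrder('ABC-2')).toBe(true);
});
```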

As someone in testing, I understand that test data management can be challenging, and you’ve probably experienced how inconsistent test data leads to unreliable results. By using dedicated test environments and data seeding, you can ensure tests have access to the right data, avoiding shared environments that introduce unpredictable behavior.
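
A minimal seeding sketch for Cypress is below; the reset/seed endpoints and payload are hypothetical. The point is that each test creates exactly the data it needs instead of relying on shared state:

```ts
beforeEach(() => {
  cy.request('POST', '/api/test/reset'); // wipe leftover state (hypothetical endpoint)
  cy.request('POST', '/api/test/seed', {
    user: { email: 'qa-user@example.com', role: 'buyer' },
    products: [{ sku: 'ABC-1', stock: 5 }],
  });
});

it('checks out a seeded product', () => {
  cy.visit('/products/ABC-1');
  cy.get('[data-testid="add-to-cart"]').click();
});
```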

Hope this helped.

Having spent a lot of time with flaky tests, I’d say that based on your efforts, achieving a 90-95% stability rate seems realistic. While some flakiness is inevitable, continuously monitoring and optimizing test cases based on feedback and historical trends has probably helped you maintain a high level of stability.

Hope I was able to help you 🙂

Hey,

You’ve likely used monitoring tools and correlation of test failures with application logs to identify whether issues stem from the app itself. If tests fail under consistent app conditions, it’s probably time to look into performance or integration issues rather than blaming test design.
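
One lightweight way to do that correlation in Cypress is to pull recent application logs whenever a test fails; the log endpoint below is hypothetical:

```ts
// The function() syntax (not an arrow) is needed so Mocha binds
// this.currentTest inside the hook.
afterEach(function () {
  if (this.currentTest?.state === 'failed') {
    cy.request('/api/test/logs?window=60s').then(({ body }) => {
      // Surface app-side logs next to the failing test's output.
      cy.log(JSON.stringify(body));
    });
  }
});
```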

Hey,

In your work with these tools, you’ve probably found that brittle selectors and asynchronous operations are common causes of flakiness. Using robust strategies for element identification and better handling of async events has likely reduced these issues.
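
A before/after sketch of robust element identification; the selectors are hypothetical:

```ts
// Brittle: structural selectors break on any layout or ordering change.
cy.get('div.sidebar > ul li:nth-child(3) a').click();

// Robust: a dedicated test attribute survives refactors and restyling.
cy.get('[data-testid="nav-settings"]').click();
```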

Hope this helps 🙂

Hi,

You might see value in these tools for teams without coding expertise, as they can enable business users to contribute to test automation. However, you also recognize their limitations in handling complex scenarios and the risk of vendor lock-in, which could impact scalability and flexibility.

Hope this was helpful 🙂