Discussion on “Houston, We Have Problems with the Queries” by Andres Sacco | Testμ 2023

Devs often neglect query performance, causing issues in production.

Learn with Andres Sacco: impact of bad joins, framework-free queries, and more.

Explore a microservice scenario uncovering performance problems. 🚀 Test the db access layer with tools like QuickPerf for analysis. Let’s optimize! 🛠️
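QuickPerf itself is a Java library that lets a test assert how many SQL statements the code under test issues. As an illustration of the same idea outside Java, here is a minimal Python sketch (the `CountingConnection` wrapper and table names are made up for this example) that counts SELECTs through a wrapped `sqlite3` connection and asserts on the count, QuickPerf-style:

```python
import sqlite3

class CountingConnection:
    """Wraps a sqlite3 connection and counts executed SELECT statements."""
    def __init__(self, conn):
        self.conn = conn
        self.select_count = 0

    def execute(self, sql, params=()):
        if sql.lstrip().upper().startswith("SELECT"):
            self.select_count += 1
        return self.conn.execute(sql, params)

conn = CountingConnection(sqlite3.connect(":memory:"))
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada')")
rows = conn.execute("SELECT name FROM users").fetchall()

# QuickPerf-style assertion: this code path should issue exactly one SELECT.
assert conn.select_count == 1
print(rows)  # [('Ada',)]
```

If the access layer silently issued extra queries (an N+1 pattern, say), the assertion would fail and flag the regression long before production.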

Still not registered? Hurry up and grab your free tickets: Register Now!

If you have already registered and are up for the session, feel free to post your questions in the thread below 👇

Here are some of the Q&As from the session!

Did you encounter any unexpected or counterintuitive performance challenges related to database interactions? How did you go about diagnosing and addressing these challenges?

Andres Sacco: Yes, we did come across some unexpected performance issues with databases. To tackle them, we closely examined the queries, monitored database activity, and used performance testing tools to identify bottlenecks. Then, we optimized the queries and made the necessary adjustments to enhance database performance.
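One concrete way to “closely examine the queries”, as a hedged sketch: SQLite’s `EXPLAIN QUERY PLAN` (other databases have an equivalent `EXPLAIN`) shows whether a query does a full table scan, and whether adding an index changes that. The table and index names below are invented for illustration, and the exact plan wording varies by SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable detail string.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # without an index: a full scan of orders

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # with the index: an index search instead of a scan

print(before)
print(after)
```

Running the plan check before and after an index change makes the bottleneck (and the fix) visible without guessing.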

In database testing, what is required for writing good test cases?

Andres Sacco: To write good test cases for database testing, you need to consider several things. First, understand the database schema and data flow. Then, create test cases that cover various scenarios, including data insertion, retrieval, and modification. Ensure your test cases are well-documented and cover both normal and edge cases to thoroughly validate the database’s functionality.
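To make that concrete, here is a minimal sketch of such test cases using Python’s `unittest` with an in-memory SQLite table (the `users` schema is invented for this example): one case each for insertion + retrieval and modification, plus an edge case checking that a constraint violation is rejected.

```python
import sqlite3
import unittest

class UserTableTest(unittest.TestCase):
    """Covers insertion, retrieval, modification, and one edge case."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL)")

    def test_insert_and_retrieve(self):
        self.conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
        row = self.conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
        self.assertEqual(row, ("a@example.com",))

    def test_modification(self):
        self.conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
        self.conn.execute("UPDATE users SET email = 'b@example.com' WHERE id = 1")
        row = self.conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
        self.assertEqual(row, ("b@example.com",))

    def test_edge_case_duplicate_email_rejected(self):
        # Edge case: the UNIQUE constraint must reject a duplicate email.
        self.conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

if __name__ == "__main__":
    unittest.main()
```

Each test documents one scenario, and the edge case exercises behavior the happy path would never reveal.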

What steps have been taken to address and resolve the issues encountered with the queries, and how do you plan to ensure the stability and efficiency of the query system moving forward?

Andres Sacco: We addressed the query issues by identifying slow-performing queries and optimizing them. We also monitor query performance regularly. To ensure stability and efficiency, we plan to continue monitoring, use performance tools, and follow best practices for query optimization.
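The first step of “identifying slow-performing queries” can be as simple as a timing wrapper that logs anything over a threshold. This is an illustrative sketch, not a production monitor: `timed_execute`, the threshold value, and the table are all invented here, and a real system would feed the log into an alerting pipeline.

```python
import sqlite3
import time

SLOW_QUERY_THRESHOLD_S = 0.05  # hypothetical threshold; tune per system

slow_query_log = []  # in production this would feed real monitoring/alerting

def timed_execute(conn, sql, params=()):
    """Run a query and record it if it exceeds the slow-query threshold."""
    start = time.perf_counter()
    cursor = conn.execute(sql, params)
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_QUERY_THRESHOLD_S:
        slow_query_log.append((sql, elapsed))
    return cursor

conn = sqlite3.connect(":memory:")
timed_execute(conn, "CREATE TABLE t (x INTEGER)")
rows = timed_execute(conn, "SELECT * FROM t").fetchall()
print(rows, slow_query_log)
```

Anything that lands in the log is a candidate for the optimization pass described above.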

What would a chaos testing scenario for database testing look like?

Andres Sacco: Chaos testing for databases involves intentionally causing disruptions to the database to see how it behaves under stress. We might simulate scenarios like sudden traffic spikes, server failures, or data corruption to test the database’s resilience and recovery capabilities. The goal is to uncover vulnerabilities and improve the database’s reliability.
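As a toy illustration of fault injection (the simplest form of such chaos testing), here is a sketch that wraps a connection so queries randomly fail, then verifies that retry logic still gets the work done. The `FlakyConnection` wrapper, failure rate, and retry policy are all invented for this example; real chaos tooling injects faults at the network or infrastructure level.

```python
import random
import sqlite3

class FlakyConnection:
    """Chaos wrapper: randomly fails queries to simulate outages."""
    def __init__(self, conn, failure_rate, rng=None):
        self.conn = conn
        self.failure_rate = failure_rate
        self.rng = rng or random.Random()

    def execute(self, sql, params=()):
        if self.rng.random() < self.failure_rate:
            raise sqlite3.OperationalError("injected failure (chaos test)")
        return self.conn.execute(sql, params)

def execute_with_retry(conn, sql, params=(), attempts=5):
    """Resilience under test: retry on transient failures, then give up."""
    for attempt in range(attempts):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError:
            if attempt == attempts - 1:
                raise

# Seeded RNG so the injected failures are reproducible in this sketch.
conn = FlakyConnection(sqlite3.connect(":memory:"), failure_rate=0.5,
                       rng=random.Random(42))
execute_with_retry(conn, "CREATE TABLE t (x INTEGER)")
execute_with_retry(conn, "INSERT INTO t VALUES (1)")
rows = execute_with_retry(conn, "SELECT x FROM t").fetchall()
print(rows)
```

Even with half the queries failing, the retry path keeps the workload correct, which is exactly the kind of resilience property a chaos scenario is meant to confirm or falsify.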

Let’s see some of the unanswered questions:

Could you share an experience where a seemingly minor query performance issue was amplified when the microservice was scaled up? How did you prevent such issues from impacting scalability?


How do you write test cases from requirements? Do the requirements signify the exact functionality of the AUT?


What are the best tools for performance testing on the database itself?

Hi there,

If you couldn’t catch the session live, don’t worry! You can watch the recording here:

Additionally, we’ve got you covered with a detailed session blog:

Sure, I can share an experience. In one project, we encountered what seemed like a minor query performance issue in a single instance of a microservice. The query ran fine during initial development and testing, but as we scaled up the microservice to handle more traffic, the performance started degrading noticeably. The seemingly minor issue became a major bottleneck.

To address this, we adopted a proactive approach to prevent such issues from impacting scalability in the future.

We implemented performance testing and profiling from the early stages of development, replicating anticipated production traffic. This helped identify and resolve performance bottlenecks at an early stage. We also fine-tuned our database queries and indexing strategies to ensure optimal performance.
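A classic example of a “minor” query issue that amplifies at scale is the N+1 pattern: one query for a parent list, then one query per row. The sketch below (tables and data invented for illustration) shows the pattern and the single-JOIN rewrite that keeps the query count constant no matter how many customers there are:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

def totals_n_plus_one():
    """N+1 pattern: one query for customers, then one query per customer."""
    result = {}
    for cid, name in conn.execute("SELECT id, name FROM customers"):
        (total,) = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (cid,)).fetchone()
        result[name] = total
    return result

def totals_joined():
    """Single JOIN: a constant number of queries regardless of customer count."""
    return dict(conn.execute("""
        SELECT c.name, COALESCE(SUM(o.total), 0)
        FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id"""))

print(totals_n_plus_one())  # {'Ada': 30.0, 'Grace': 5.0}
print(totals_joined())      # same result, one query instead of N+1
```

With two customers the difference is invisible; with thousands of customers across many service instances, the per-row queries become exactly the kind of bottleneck described above.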

Moreover, we incorporated continuous monitoring and alerting into our production environment to detect and address performance anomalies in real-time.

By taking these measures, we not only resolved the immediate issue but also established a proactive and scalable approach to performance management. It emphasized the significance of thorough testing, early identification of bottlenecks, and continuous monitoring to maintain the efficiency of microservices as they scale up.

Writing test cases from requirements is a crucial step in ensuring that the software meets its intended functionality. Requirements serve as the foundation for test case creation, but it’s essential to understand that requirements may not always detail the exact functionality of the Application Under Test (AUT).

Requirements typically describe the system’s intended behavior, features, and constraints, often at a high level. Test cases, on the other hand, break down these requirements into detailed, step-by-step instructions to verify that the system performs as expected.

When writing test cases from requirements, begin by analyzing the requirements thoroughly. Identify what the software is supposed to do and, sometimes just as importantly, what it’s not supposed to do. Develop test cases that cover different scenarios, including positive tests to verify correct behavior and negative tests to uncover potential issues.

It’s important to maintain traceability, meaning each test case should be linked back to the specific requirement it validates. This helps ensure comprehensive coverage and allows you to confirm that all aspects of the requirements have been tested.
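One lightweight way to keep that traceability is to tag each test with the requirement it validates. The sketch below is purely illustrative: the requirement IDs, the `traces` decorator, and the `check_login` stand-in are all made up here, since real teams usually get this from a test management tool or pytest markers.

```python
import unittest

TRACEABILITY = {}  # requirement ID -> test names that validate it

def traces(requirement_id):
    """Decorator linking a test case back to the requirement it validates."""
    def wrap(fn):
        TRACEABILITY.setdefault(requirement_id, []).append(fn.__name__)
        return fn
    return wrap

def check_login(user, password):
    # Stand-in for the AUT's real login logic.
    return password == "correct-password"

class LoginTests(unittest.TestCase):
    @traces("REQ-101")  # hypothetical requirement: valid users can log in
    def test_valid_credentials_accepted(self):
        self.assertTrue(check_login("user", "correct-password"))

    @traces("REQ-102")  # hypothetical requirement: invalid logins are rejected
    def test_invalid_credentials_rejected(self):
        self.assertFalse(check_login("user", "wrong-password"))

print(TRACEABILITY)
```

Dumping `TRACEABILITY` at the end of a run gives a quick coverage report: any requirement ID missing from the map has no test validating it.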

Additionally, collaboration with the stakeholders and developers can be beneficial in clarifying any ambiguities in the requirements and refining the test cases for accuracy. Ultimately, the goal is to create test cases that thoroughly validate the software’s functionality, ensuring that it aligns with the intended requirements while also identifying potential issues and edge cases.


There are several tools available for performance testing on databases to evaluate their speed, scalability, and reliability. Some of the best tools include:

  1. Apache JMeter: JMeter is a versatile open-source tool widely used for performance testing, including database performance. It allows you to create custom test plans for simulating various database workloads.
  2. Apache Bench (ab): This is a simple command-line tool for benchmarking your web server; by pointing it at database-backed endpoints, you can get a rough, indirect read on database performance.
  3. SQLQueryStress: This is a Windows-based tool that allows you to stress test your SQL Server database with customizable query patterns and concurrency settings.
  4. HammerDB: Specifically designed for database benchmarking, HammerDB supports various databases, including Oracle, SQL Server, PostgreSQL, and others.
  5. Gatling: While primarily used for web application performance testing, Gatling can be extended to include database performance testing scenarios. It’s known for its high scalability and real-time reporting.