Building Core Components for Web | Test Automation Framework Development | Part II | LambdaTest

Hello Folks!

We are live with another video from Anton’s Test Automation Framework series on “Building Core Components for Web.” Watch this video to learn how to build core components for web automation. Don’t miss out!

In my own experience, I’ve found that designing a modular framework is one of the best ways to keep automation efforts maintainable and scalable. By breaking things down into distinct components, it becomes much easier to adapt and evolve as the project grows.

Page Object Model

In my experience, creating separate classes for each web page or component really helps keep things organized. By encapsulating page-specific elements and interactions in a Page Object Model (POM), you keep the test logic clear and separate from the page interactions. This not only improves reusability but also ensures that UI updates don’t break the test scripts, as long as the page objects themselves are kept up to date.

Data-Driven Testing

Another great strategy is to externalize the test data. I’ve used everything from CSV files to databases to pass multiple datasets through the same test scripts, and it significantly boosts the adaptability and coverage of the tests. You’ll find that the flexibility this adds makes maintaining large test suites much easier.
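Here’s a minimal sketch of the idea. The dataset is inlined for brevity; the rows stand in for data you would load from a CSV file or database, and `runLoginCheck` is a hypothetical stand-in for the real login flow:

```typescript
interface Credentials {
  username: string;
  password: string;
}

// Stand-in for externalized test data; in practice this would be parsed
// from a CSV file, a JSON fixture, or a database query.
const dataSets: Credentials[] = [
  { username: 'standardUser', password: 'pass123' },
  { username: 'adminUser', password: 'admin456' },
];

// Hypothetical check standing in for the real login flow.
function runLoginCheck(creds: Credentials): boolean {
  return creds.username.length > 0 && creds.password.length > 0;
}

// The same logic runs once per dataset — extending coverage means
// adding rows of data, not writing new test code.
const results = dataSets.map(runLoginCheck);
```

The key design point is that the test logic never changes when new scenarios are added; only the data source grows.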

Example

Here’s a basic implementation that I’ve used in some of my projects:

// Page Object Class (Playwright-style; the Page instance is injected
// through the constructor instead of relying on a global)
import { test, expect, Page } from '@playwright/test';

class LoginPage {
  private usernameField = '#username';
  private passwordField = '#password';
  private submitButton = '#submit';

  constructor(private page: Page) {}

  public async login(username: string, password: string) {
    await this.page.fill(this.usernameField, username);
    await this.page.fill(this.passwordField, password);
    await this.page.click(this.submitButton);
  }
}

// Test Script
test('Login Test', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.login('user', 'password');
  // Add assertions here, e.g. await expect(page).toHaveURL(/dashboard/);
});

Implementing integrated reporting and logging has been key to capturing and analyzing test results effectively. It provides both high-level overviews and detailed insights into the execution of each test case, which has helped me identify issues quickly and improve overall test quality.

Reporting:

I love using tools like Allure or generating HTML reports because they offer a comprehensive breakdown of test results. These reports include everything from screenshots to logs and test metrics, giving me all the data I need in one place to analyze the performance and health of my tests.

Logging:

For me, integrating logging frameworks like Log4j or Winston has been a must for capturing detailed execution logs, errors, and debug information. Having these logs available during test execution not only makes troubleshooting smoother but also gives a clear understanding of where things might have gone wrong.

Here’s an example of how I implement both reporting and logging:

// Logging with Winston; reporting (e.g. Allure or an HTML reporter) is
// typically wired into the test runner's configuration rather than the
// test file itself.
import winston from 'winston';

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

test('Sample Test', async () => {
  try {
    // Test code
  } catch (error) {
    logger.error('Test failed', { error });
    throw error;
  }
});

Integrating your test automation framework with CI/CD pipelines has been a crucial part of ensuring that my test execution is both automated and consistent across different stages of development. Whenever I’m working with tools like Jenkins, GitLab CI, or GitHub Actions, this integration helps streamline the entire process—from code commits to testing and deployment.

Key Steps I Follow:

  1. CI Tools: I configure CI tools like GitHub Actions to run my test suite on code commits or pull requests. Having the tests execute automatically when changes are made gives me confidence that any issues are caught early on, without manual intervention.
  2. Environment Management: Managing different environments—such as development, staging, or production—is another important aspect. I set up environment configurations and variables within the pipeline to handle these environments seamlessly. This ensures that tests are run in the correct context, whether it’s for a quick development check or a pre-production deployment.
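As a sketch of step 2, environment-specific values can be injected at the job level in GitHub Actions. The `staging` environment name and the variable names below are hypothetical placeholders:

```yaml
# Hypothetical job-level environment configuration in GitHub Actions
jobs:
  test:
    runs-on: ubuntu-latest
    environment: staging                    # assumed environment name
    env:
      BASE_URL: ${{ vars.BASE_URL }}        # assumed repository variable
      API_TOKEN: ${{ secrets.API_TOKEN }}   # assumed repository secret
    steps:
      - name: Run tests against staging
        run: npm test
```

Keeping these values in the pipeline rather than in the test code means the same suite can target development, staging, or production without modification.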

Implementation Example:

Here’s a basic GitHub Actions workflow that I’ve used in the past for automating test execution:

# GitHub Actions Workflow Configuration
name: Run Tests

on:
  push:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test

By integrating my automation framework into CI/CD pipelines, I’ve been able to automate testing from the moment code is pushed to the main branch. The pipeline takes care of everything—from checking out the code to installing dependencies and running the tests. This not only saves time but ensures a level of consistency that’s crucial in catching bugs early and keeping the release cycle smooth.