LoadForge now offers headless Chrome browser testing powered by Playwright, allowing you to run real Chrome browsers instead of the standard Locust method of testing. This feature enables you to simulate actual user interactions with your website, measure performance metrics like Largest Contentful Paint (LCP), and validate the user experience directly. This functionality is designed primarily for QA teams to validate site functionality at scale; it is not intended for load testing.

How It Works

Browser testing in LoadForge uses Playwright under the hood to control headless Chrome instances. Unlike traditional load testing with virtual users, browser testing launches actual browser instances that:

  • Render your website fully, including JavaScript execution
  • Process CSS and display the page as a real user would see it
  • Execute user interactions like clicks, form submissions, and navigation
  • Measure Core Web Vitals and other performance metrics

Each LoadForge generator can spawn multiple browser instances, allowing you to simulate multiple real users simultaneously interacting with your website.
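
The examples below control this with the multiplier class attribute on PlaywrightUser, which (as we understand the locust-plugins implementation) multiplies the number of concurrent browser sessions each Locust user drives. A minimal sketch, with an illustrative class and task name:

from locust import task
from locust_plugins.users.playwright import PageWithRetry, PlaywrightUser, pw

class BrowserUser(PlaywrightUser):
    # Each Locust user drives this many concurrent browser sessions,
    # so the number of browsers per generator is users x multiplier.
    multiplier = 5

    @task
    @pw
    async def visit_home(self, page: PageWithRetry):
        await page.goto("/")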

Implementation

To use browser testing in LoadForge, you’ll need to create a test script using the PlaywrightUser class from locust_plugins.users.playwright. This class provides the foundation for browser-based testing, with methods for navigating to pages, interacting with elements, and measuring performance.

Key Components

  • PlaywrightUser: The base class for browser testing
  • PageWithRetry: A wrapper around Playwright’s Page object with retry capabilities
  • pw decorator: Used to mark tasks that use Playwright
  • event context manager: Used to measure and report the duration of specific actions

Example Scripts

Tip: We have a number of other examples available here.

Basic Site Navigation

This example demonstrates how to navigate through a website and measure the performance of different actions:

from locust import task, run_single_user
from locust_plugins.users.playwright import PageWithRetry, PlaywrightUser, pw, event
import time

class Manual(PlaywrightUser):
    multiplier = 5

    @task
    @pw
    async def loadforge(self, page: PageWithRetry):
        try:
            async with event(self, "Load home"):
                await page.goto("/")
            async with event(self, "Go to articles"):
                async with page.expect_navigation(wait_until="domcontentloaded"):
                    await page.click('a:has-text("Articles")')
        except Exception:
            pass  # ignore failures here so a missing element or slow navigation doesn't stop the test
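
Because run_single_user is already imported at the top of the script, you can optionally debug this user locally, outside LoadForge, by appending a standard Locust entry point, for example:

if __name__ == "__main__":
    run_single_user(Manual)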

Performance Monitoring with LCP Check

This example shows how to measure the Largest Contentful Paint (LCP) and report pass/fail results based on a threshold:

from locust import task, run_single_user
from locust_plugins.users.playwright import PageWithRetry, PlaywrightUser, pw, event
import time

class Manual(PlaywrightUser):
    LCP_THRESHOLD = 1200  # ms

    @task
    @pw
    async def lcp_check(self, page: PageWithRetry):
        # Set up CDP network throttling (~1 Mbit/s up/down with 50 ms of added latency)
        self.client = await self.browser_context.new_cdp_session(self.page)
        await self.client.send("Network.enable")
        await self.client.send(
            "Network.emulateNetworkConditions",
            {
                "offline": False,
                "downloadThroughput": (1 * 1024 * 1024) / 8,  # ~1 Mbit/s expressed in bytes per second
                "uploadThroughput": (1 * 1024 * 1024) / 8,  # ~1 Mbit/s expressed in bytes per second
                "latency": 50,  # minimum request latency in milliseconds
            },
        )

        self.start_time = time.time()
        async with event(self, "Load up home"):
            await page.goto("/")

        await page.wait_for_timeout(1000)  # allow late paints to register

        lcp = await page.evaluate(
            """
            new Promise((resolve) => {
                new PerformanceObserver((l) => {
                    const entries = l.getEntries();
                    const largestPaintEntry = entries.at(-1);
                    resolve(largestPaintEntry.startTime);
                }).observe({
                    type: 'largest-contentful-paint',
                    buffered: true
                });
            })
            """
        )

        # Determine pass/fail based on LCP
        passed = lcp <= self.LCP_THRESHOLD
        request_name = f"LCP {'PASS' if passed else 'FAIL'}"

        # Report the result back to LoadForge as a custom request entry
        self.environment.events.request.fire(
            request_type="LCP",
            name=request_name,
            start_time=self.start_time,
            response_time=lcp,
            response_length=0,
            context={**self.context()},
            url="/",
            exception=None if passed else Exception(f"LCP too high: {lcp:.0f}ms"),
        )

Common Use Cases

  • Core Web Vitals Monitoring: Measure LCP, FID, CLS, and other performance metrics
  • User Journey Validation: Ensure critical user flows work correctly
  • Visual Regression Testing: Capture screenshots and compare them to detect visual changes (see the sketch after this list)
  • Accessibility Testing: Validate that your site meets accessibility standards
  • SEO Validation: Check that your site is properly optimized for search engines
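
For example, a browser task can capture a full-page screenshot that you can later diff against a known-good baseline. This is a minimal sketch with an illustrative class name and file path, and it assumes PageWithRetry exposes the underlying Playwright page API (as it does for goto and click in the examples above):

from locust import task
from locust_plugins.users.playwright import PageWithRetry, PlaywrightUser, pw

class ScreenshotUser(PlaywrightUser):
    @task
    @pw
    async def capture_home(self, page: PageWithRetry):
        await page.goto("/")
        await page.wait_for_timeout(1000)  # let late-loading content settle
        # Save a full-page screenshot for later visual comparison
        await page.screenshot(path="home.png", full_page=True)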

Limitations

While browser testing provides valuable insights into real user experience, it comes with some limitations compared to traditional load testing:

  • Scale: A typical load generator can simulate 10,000-40,000 virtual users with the standard Locust method, but only 5-10 real browser instances per generator with browser testing
  • Resource Intensity: Browser testing requires significantly more CPU and memory resources per user
  • Purpose: Browser testing is not designed for load testing but for real user simulation, monitoring, and performance validation
  • Complexity: Browser test scripts tend to be more complex than standard HTTP request scripts

Browser testing is ideal for monitoring user experience, checking performance metrics like Lighthouse scores and Core Web Vitals, and validating critical user journeys. It should not replace traditional load testing for stress testing your infrastructure.

Best Practices

  1. Use Appropriate Wait Times: Add realistic wait times between actions to simulate real user behavior (see the combined sketch after this list)
  2. Handle Errors Gracefully: Implement proper error handling to ensure your tests continue even if elements aren’t found
  3. Measure Specific Events: Use the event context manager to measure the performance of specific actions
  4. Set Realistic Thresholds: Define appropriate thresholds for performance metrics based on your business requirements
  5. Combine with Traditional Testing: Use both browser testing and traditional load testing for comprehensive coverage
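
The sketch below pulls several of these recommendations together: realistic pauses between steps, explicit event measurement, and error handling that lets the test keep running. The class name, flow, and selector are illustrative, not taken from a real site:

from locust import task
from locust_plugins.users.playwright import PageWithRetry, PlaywrightUser, pw, event

class JourneyUser(PlaywrightUser):
    @task
    @pw
    async def browse_and_open_pricing(self, page: PageWithRetry):
        try:
            async with event(self, "Load home"):
                await page.goto("/")
            # Realistic pause between actions to mimic a human reading the page
            await page.wait_for_timeout(2000)
            async with event(self, "Open pricing"):
                async with page.expect_navigation(wait_until="domcontentloaded"):
                    await page.click('a:has-text("Pricing")')
        except Exception:
            # Keep the user running even if an element is missing or navigation fails
            pass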