Methodology and limitations

How the report should be interpreted

The Website Quality Report summarizes a point-in-time scan of 15,542 public websites. The goal is to identify broad measurable patterns, not to assign blame, shame individual businesses, or explain the cause of any single result.

Dataset scope

Of the 15,542 websites scanned, 15,541 scans completed successfully and 1 scan failed.

Completed scans were included in score analysis. Failed scans were retained in the dataset but excluded from score distribution calculations.
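The inclusion rule above can be sketched in a few lines. This is an illustrative example only, not the report's actual pipeline; the record fields ("status", "site_score") and values are assumptions invented for the sketch.

```python
# Hypothetical sketch of the inclusion rule: failed scans are retained in
# the dataset but excluded from score distribution calculations.
# Field names and values are illustrative, not the report's real schema.
from statistics import mean

scans = [
    {"url": "example-a.com", "status": "completed", "site_score": 81},
    {"url": "example-b.com", "status": "completed", "site_score": 64},
    {"url": "example-c.com", "status": "failed",    "site_score": None},
]

# Keep every record, but compute distributions over completed scans only.
completed = [s for s in scans if s["status"] == "completed"]
avg_score = mean(s["site_score"] for s in completed)
print(f"{len(completed)} of {len(scans)} scans included; mean score {avg_score:.1f}")
```

With the sample records above, the failed scan stays in the dataset but contributes nothing to the mean, mirroring how the 1 failed scan was handled.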

Measurements captured

The scan measured website quality signals including overall Site Score, performance score, mobile score, load time, accessibility issues, and structure health.


Average Performance Score: 75
Average Mobile Score: 67
Average Load Time: 7.19 seconds
Average Accessibility Issues Detected: 7.44
Average Structure Health Score: 72.6

Point-in-time results

Scores should be interpreted as point-in-time measurements, not permanent fixed values. Website performance can change between scans due to server response, network conditions, third-party scripts, redirects, caching, temporary outages, and live site changes.

Validation sample

A small validation sample was rescanned using a second scanning system. The rescans showed directional consistency with the original results, with most sampled scores falling within a small variance range.

A single larger variance was observed; this is consistent with live-site testing, where load conditions and third-party dependencies can change between scans.
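The rescan comparison described above amounts to checking each sampled site's score delta against a tolerance. The sketch below illustrates the idea; the scores, site names, and the 5-point tolerance are all assumptions for demonstration and do not come from the report.

```python
# Illustrative rescan comparison: flag sites whose rescanned score differs
# from the original by more than a tolerance. All values and the 5-point
# threshold are made up for this sketch, not the report's methodology.
original = {"site-a": 78, "site-b": 65, "site-c": 90}
rescan   = {"site-a": 76, "site-b": 67, "site-c": 71}

TOLERANCE = 5  # maximum acceptable Site Score delta, in points

for site, orig_score in original.items():
    delta = abs(rescan[site] - orig_score)
    status = "within tolerance" if delta <= TOLERANCE else "larger variance"
    print(f"{site}: delta {delta} ({status})")
```

In this toy sample, two sites fall within tolerance and one shows a larger variance, mirroring the directional-consistency pattern reported for the validation sample.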

What this dataset does not prove

This dataset does not explain why any individual website performed poorly. It does not show whether a site was professionally built, how much was paid for it, or who built it.

Its purpose is to surface broad, measurable patterns across a large sample of public websites, not to single out individual businesses or assign blame to specific vendors.