Broadband speed checker
Published by Pulse (SearchSwitchSave.com). Reviewed April 2026 by the UKSpeedTest editorial team led by Dr Alex J Martin-Smith.
Use this broadband speed checker when you need an honest view of what your connection is delivering right now, not the marketing maximum on a package page. Pulse shows download speed, latency, and jitter in one place, so you can decide whether to troubleshoot your setup, collect better evidence, or compare alternatives. The strongest decisions come from fair repeated tests, not one isolated result.
Who this page is for
- You are experiencing buffering, lag, or unstable calls and want to verify whether the connection is the likely cause.
- You are considering an upgrade or switch and want measured evidence from your real setup first.
- You need a clear, repeatable testing method before raising a support complaint with your provider.
Definitions you should know before testing
- Checker result: a measurement snapshot for this browser session, not a legal verdict by itself.
- Fair test: repeated runs with known conditions (device, room, time, wired or Wi-Fi), so results are comparable.
- Action threshold: the point where repeated, consistent underperformance starts justifying escalation.
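The jitter figure in particular is easier to reason about with numbers in front of you. As a rough sketch (the latency samples below are made up for illustration, and real tools may define jitter slightly differently), jitter can be summarised as the average change between consecutive latency measurements:

```python
# Illustrative latency samples in milliseconds; the values are invented
# for demonstration, not taken from a real Pulse run.
latency_ms = [24.0, 26.5, 23.8, 41.2, 25.1, 27.9]

# One common summary of jitter: the mean absolute difference between
# consecutive latency samples. A single spike (41.2 here) inflates it.
diffs = [abs(b - a) for a, b in zip(latency_ms, latency_ms[1:])]
jitter_ms = sum(diffs) / len(diffs)

print(f"jitter: {jitter_ms:.1f} ms")  # → jitter: 8.3 ms
```

A steady connection with one bad spike can still show high jitter, which is why repeated runs matter more than any single snapshot.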
Real UK scenarios and what they show
In Oxford, a home worker saw lunchtime speeds that looked fine but evening meetings dropped repeatedly. By tracking checker runs over one week with notes on room, device, and time, the pattern became obvious: high evening jitter and weaker throughput on Wi-Fi, while wired performance stayed steadier. That evidence shifted the fix from “upgrade immediately” to “stabilise home wireless first”, avoiding unnecessary cost.
In Hull, a family assumed a package downgrade had happened because streaming quality collapsed during peak hours. The checker logs showed that two consoles and automatic cloud backups saturated capacity at exactly the same time each night. Staggering updates and moving one console to Ethernet improved stability enough that they delayed switching provider.
Step-by-step: run a fair test and get useful evidence
- Start with one controlled baseline near your router; use Ethernet if possible.
- Repeat from your normal usage location on Wi-Fi so you capture real experience.
- Test at different times, especially when problems usually happen.
- Track each run with notes: device, location, and household activity.
- Escalate only after patterns are clear and repeatable.
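The steps above amount to building one consistent log. A minimal sketch of that log, assuming a plain CSV file (the filename and field names here are illustrative, not a Pulse export format), might look like this:

```python
import csv
import os
from datetime import datetime

# Hypothetical log schema for fair-test evidence; field names are our own.
FIELDS = ["timestamp", "device", "location", "connection",
          "download_mbps", "latency_ms", "jitter_ms", "notes"]

def log_run(path, **run):
    """Append one checker run to a CSV log, writing a header if the file is new."""
    new_file = not os.path.exists(path)
    run.setdefault("timestamp", datetime.now().isoformat(timespec="minutes"))
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(run)

# Example: one evening Wi-Fi run from the home office, with a note on
# what else was active on the network at the time.
log_run("speed_log.csv", device="laptop", location="office",
        connection="wifi", download_mbps=34.2, latency_ms=28,
        jitter_ms=9.5, notes="two consoles online")
```

The point is not the tooling but the discipline: every run carries the same fields, so later comparisons are like-for-like.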
How to interpret your result without guesswork
Start by comparing your result to your own baseline and your own usage needs, not to generic headline speeds. If your home office calls are stable and file syncs complete on time, a lower figure may still be fit for purpose. If everyday usage breaks down, then the absolute number matters less than repeatable evidence of instability. Track at least a few runs over different times, especially when issues are most visible.
When results differ between rooms, this usually points to local wireless factors. When results are consistently weak even on controlled runs, you have a stronger case to escalate. Keep your notes practical: date, time, device, connection method, and what else was active on the network. This simple discipline turns anecdotal frustration into a credible troubleshooting record.
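One simple way to turn those notes into a decision signal, assuming you have recorded runs as described (the figures below are illustrative), is to group runs by connection method and compare medians:

```python
from statistics import median

# Illustrative download results in Mbps, tagged by connection method.
runs = [
    ("ethernet", 62.0), ("ethernet", 60.5), ("ethernet", 61.2),
    ("wifi", 34.2), ("wifi", 51.0), ("wifi", 29.8),
]

# Group speeds by method so wired and wireless runs are compared fairly.
by_method = {}
for method, mbps in runs:
    by_method.setdefault(method, []).append(mbps)

for method, speeds in by_method.items():
    print(f"{method}: median {median(speeds):.1f} Mbps over {len(speeds)} runs")
```

A large gap between the wired and wireless medians points at local Wi-Fi conditions; weak medians on both point more towards the line or provider, which strengthens the case for escalation.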
What to do next: troubleshoot, escalate, or switch
First, apply low-risk fixes: reposition the hub, reduce local interference, pause background traffic during critical tasks, and retest after each change. Second, if performance remains poor on fair repeat runs, contact your provider with a concise summary and your log. Third, if outcomes stay weak after support steps, use comparison and rights pages to evaluate whether a package change or provider switch is justified for your usage profile.
Mistakes that make speed evidence weak
The most common mistake is mixing too many variables in one comparison. If you change device, room, and time all at once, you cannot tell what caused the difference. Another common issue is testing while background updates are active, then treating that run as representative. You also weaken your case if you keep no notes and rely on memory. A provider support agent can only act on what is documented, so clear notes are not bureaucracy; they are leverage.
A second mistake is making binary decisions from one number, such as immediately upgrading package or immediately blaming line faults. In practice, users usually need a sequence: first isolate local factors, then compare repeated baselines, then escalate with evidence. This process sounds slower, but it usually saves time, money, and frustration because you avoid dead-end support calls and unnecessary contract changes.
Evidence checklist before opening a complaint
- At least three to six runs across different days, including problem periods.
- One controlled baseline run with minimal local variables.
- Notes on device, location, wired/Wi-Fi setup, and concurrent high-demand usage.
- Screenshots or copied results saved with clear timestamps.
- A concise summary of impact on real activities (calls, streaming, work tools, gaming).
Final quality check before changing package
Before committing to a new tariff, run one final comparison cycle so your decision is grounded in current evidence rather than historical frustration. Include at least one run in your most problematic time window and one run in a quieter period. If poor performance persists across both and remains visible in real tasks, the upgrade or switch case is much stronger. If the pattern is mostly room-specific or device-specific, local optimisation may still deliver better value than a contract change.
This final check also gives you a useful baseline for after-change validation. If you do switch provider or package, rerun the same method in the same rooms and times. That way you can confirm whether the change delivered meaningful improvement in stability and responsiveness, not just a temporary first-day speed spike.
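The after-change validation can be sketched as a plain comparison of medians from the same method, same rooms, and same times before and after the switch (the figures below are invented for illustration):

```python
from statistics import median

# Illustrative evening Wi-Fi results in Mbps from the same room and
# time window, recorded before and after a provider change.
before = [34.2, 29.8, 31.5, 30.1]
after = [58.9, 61.3, 57.4, 60.0]

# Comparing medians of matched runs, rather than a single first-day
# result, guards against judging the change on a temporary speed spike.
improvement = median(after) - median(before)
print(f"median change: {improvement:+.1f} Mbps")
```

If the median barely moves even though the first run after switching looked fast, the change has not delivered a stable improvement.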
Pulse measures download speed, latency, and jitter. Upload speed is not measured in the current release.
Run the right next check
Use this guide with the live Pulse test, then check how Pulse measures your speed so your interpretation is consistent.
If you want a second benchmark, review UK speed test comparison for when UKSpeedTest, Ookla, Fast.com, or Google is the best fit.
Related guides
- How to run an accurate broadband speed test
- Why speed tests can be slower than your package
- Slow broadband rights in the UK
- Broadband speed by provider
Useful tools from the FBRE network
If you want a second opinion or next-step tools, try HowFast for an additional speed-check perspective, Laggy for latency-focused checks, Broadband Map for postcode availability context, and BroadbandSwitch.uk when you are comparing deals before switching.
You can browse the wider site list at FBRE.uk.
FAQ
Is one broadband speed check enough to prove a fault?
Usually no. One run can be useful as a quick signal, but providers and engineers generally look for patterns over time. The strongest case combines repeated checks, consistent setup notes, and at least one wired baseline that removes in-home Wi-Fi variability. This makes your evidence harder to dismiss and helps support teams route your issue correctly.
Should I use Wi-Fi or Ethernet for checker runs?
Use both, but for different reasons. Ethernet gives a cleaner baseline for what your line can deliver to a fixed device, while Wi-Fi captures your lived experience in the room where you work or stream. Comparing the two tells you whether the bottleneck is likely in-home wireless conditions or a broader connection issue.
Can Pulse decide if my ISP has breached a contract?
No single online checker can determine contractual outcomes on its own. Pulse helps you gather consistent technical evidence so your conversations with your provider are clearer. Contract rights depend on your package terms, provider commitments, and formal complaint process. Pair test logs with provider communication records for the strongest position.
Why are evening checks often lower than daytime checks?
Evenings combine heavier household demand, more neighbouring Wi-Fi activity, and sometimes wider network contention. That does not automatically mean your line is faulty, but it can affect user experience. Logging time-stamped results helps separate a temporary peak-time dip from a persistent underperformance issue that deserves escalation.