Having the technical capability to accurately measure bandwidth and network performance right from the Linux shell is an invaluable asset for any system administrator. This guide explores common command line utilities that can assist in diagnosing connectivity issues, benchmarking throughput, and monitoring internet speeds over time.
Fast-cli – Leveraging Netflix's Global CDN
Fast-cli offers one of the simplest interfaces for testing internet speeds directly from the terminal. Under the hood, it runs a series of connectivity tests against Netflix's content delivery network (CDN) servers:
$ sudo apt update
$ sudo apt install npm
$ sudo npm install -g fast-cli
With installation complete, check speeds with:
$ fast --upload

Internally, fast.com evaluates upload and download performance by establishing TCP connections to Netflix servers and measuring transfer rates. Netflix has positioned CDN nodes in over 190 countries to facilitate responsive streaming worldwide. Leveraging these optimized endpoints often provides an accurate measure of real-world speeds when accessing popular content.
Beyond its simplicity, fast-cli lends itself to scripted, scheduled tests: its --json flag produces machine-readable output. The deliberately minimal set of options also makes results easy to compare across time and networks.
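Since fast-cli can emit JSON, scheduling it from cron yields a simple long-term log. The entry below is an illustrative sketch (binary path and log location are assumptions; adjust for your system):

```shell
# Illustrative crontab entry: log a timestamped JSON measurement every 30 minutes.
# Paths are examples only; locate the binary with `which fast` on your host.
*/30 * * * * /usr/local/bin/fast --upload --json >> /var/log/fast-speed.jsonl 2>&1
```

One JSON object per line makes the log easy to post-process later with standard tools.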
speedtest-cli – Harnessing Speedtest.net Infrastructure
The speedtest-cli tool, a community-maintained Python script by Matt Martz, builds on speedtest.net's extensive network of public-facing test servers operated by Ookla. Installation is done via:
$ wget -O speedtest-cli https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py
$ chmod +x speedtest-cli
Basic invocation:
$ ./speedtest-cli --simple

The presented results include vital statistics: latency, download speed, and upload speed. By default, the test server with the lowest latency is selected automatically from thousands of available locations. Manual server selection is also possible via the --list and --server <id> options.
The comprehensive reach of speedtest.net's infrastructure across ISPs, combined with flexible options, makes speedtest-cli an excellent all-purpose network performance tool. The major downside is less control over the transfer itself compared with fetching files from servers you operate.
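For logging, the --simple output is easy to pick apart with awk. A minimal sketch follows; the here-string is an illustrative sample of the three-line format, to be replaced by live output from ./speedtest-cli --simple:

```shell
# Parse the three metrics from speedtest-cli's --simple output format.
# The sample text below is illustrative; in practice substitute the live
# output of: ./speedtest-cli --simple
sample="Ping: 23.51 ms
Download: 94.33 Mbit/s
Upload: 11.72 Mbit/s"

ping_ms=$(printf '%s\n' "$sample" | awk '/^Ping:/ {print $2}')
down=$(printf '%s\n' "$sample" | awk '/^Download:/ {print $2}')
up=$(printf '%s\n' "$sample" | awk '/^Upload:/ {print $2}')

# Emit one timestamped line suitable for appending to a log file
echo "$(date -u +%FT%TZ) ping=${ping_ms}ms down=${down}Mbit up=${up}Mbit"
```

Appending such lines to a file produces a history that correlates easily with other system metrics.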
wget – Testing Real File Transfer Speeds
The wget command line program can measure download speeds by actually fetching files from a web server. It comes pre-installed on most distributions:
$ wget -O /dev/null http://speedtest.wdc01.softlayer.com/downloads/test100.zip

This test works by downloading a file (100MB in the above case) from the specified server while showing real-time progress. Writing to /dev/null discards the data after download rather than storing it locally.
Benefits of using wget for speed tests include the ability to benchmark any standard web/FTP server, avoiding the variability of third-party test services, and fine-tuning parameters such as timeouts and retry behavior. The core limitations are that it measures only download capacity, over a single connection.
For example, testing against local Nginx/Apache servers under different loads yields realistic maximum throughputs. Raising OS limits on open files and network buffers can also help saturate the available capacity.
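Before a high-throughput test, it is worth checking the limits mentioned above. A quick inspection sketch (the raised values in the comments are illustrative, not recommendations):

```shell
# Inspect limits that can cap a high-throughput transfer before testing.
ulimit -n                          # max open file descriptors for this shell
cat /proc/sys/net/core/rmem_max    # kernel cap on socket receive buffer (bytes)
cat /proc/sys/net/core/wmem_max    # kernel cap on socket send buffer (bytes)

# Raising them (values illustrative; sysctl changes require root):
# ulimit -n 65535
# sudo sysctl -w net.core.rmem_max=16777216
# sudo sysctl -w net.core.wmem_max=16777216
```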
youtube-dl – Segmented Testing by Downloading Videos
The youtube-dl utility is immensely useful for pulling video and audio content from YouTube and other streaming platforms. Conveniently, it also provides a mechanism for testing internet speeds.
After installation via package manager or pip, test with:
$ youtube-dl -f best --no-cache-dir -o /dev/null https://www.youtube.com/watch?v=video_id

The above downloads the highest available quality version of the video to /dev/null, i.e. discards the output while showing progress. Behind the scenes, YouTube serves content in segments and adapts bitrates to real-time throughput estimates, which helps the transfer max out the available capacity.
Advantages include real-world results and the ability to log in and test speeds against non-public or region-restricted content. The downside is dependence on intermediate servers such as CDNs; accuracy suffers if they are not well provisioned.
curl – GET File Transfer Testing
The venerable curl program can transfer content to/from servers using various protocols. We can easily test download speeds by fetching a file:
$ curl -o /dev/null http://cachefly.cachefly.net/100mb.test
By default curl's progress meter reports average speeds in bytes per second, with k and M suffixes denoting multiples of 1024. Like wget, an advantage of curl is flexibility in testing any publicly accessible HTTP/FTP server, limited only by connectivity.
Options such as --limit-rate, or running several transfers in parallel, let a test approximate a real-world file transfer workload. Measurements also avoid the variability caused by external speed test services redirecting traffic inconsistently.
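For scripting, curl's -w format variables give a precise number instead of a progress meter; %{speed_download} reports the average download speed in bytes per second. The sketch below uses a local file:// URL so it runs offline; swap in any HTTP(S) URL for a real network test:

```shell
# Create a 10MB sample file, then measure how fast curl can read it.
# file:// keeps this demo offline; replace with an http(s):// URL for real tests.
dd if=/dev/zero of=/tmp/curl_speed_sample bs=1024 count=10240 2>/dev/null

# %{speed_download} = average transfer speed in bytes/sec
speed=$(curl -s -o /dev/null -w '%{speed_download}' file:///tmp/curl_speed_sample)
echo "average download speed: ${speed} bytes/sec"

rm -f /tmp/curl_speed_sample
```

The same -w mechanism exposes other useful variables such as %{time_total} and %{size_download} for building custom reports.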
aria2 – Multi-Connection Download Testing
The aria2 command line utility supports downloading files over HTTP(S) + BitTorrent protocols using multiple connections:
$ sudo apt install aria2
Basic download speed test using 16 threads:
$ aria2c -x 16 -s 16 -k 1M https://testserver.com/large_file

Specifying 16 connections per server (-x 16), 16 splits (-s 16), and a minimum split size of 1M (-k 1M) forces aria2 to open multiple parallel TCP streams, exercising the full concurrency the network supports.
Additional aria2 capabilities, such as download resuming and BitTorrent support, give operators fine-grained control. Load balancing across proxies and optimizing throughput at scale thus become possible.
Key Network Performance Concepts
Now that we've explored various tools, it helps to understand the key technical concepts that determine speed test measurements and real-world file transfer rates.
Latency – Time taken for a minimally sized packet to traverse the network and back, commonly reported as ping or RTT (round-trip time). Key determinants include physical distance to the test server, router processing delays, and congestion.
Jitter – Variation in latency observed over the course of multiple ping packets. Usually caused by uneven loads across network equipment.
Packet Loss % – Percentage of test packets that never receive a response. Equipment failures, signal degradation, and congestion can each drive loss anywhere up to 100%.
MTU – Maximum transmission unit, the largest packet size that can be transmitted without fragmentation. Typical values are 1500 bytes on Ethernet and 1492 bytes on PPPoE links.
Throughput – Actual amount of usable data transferred per unit of time, usually per second. Limited by the slowest bottleneck between source, destination, and intermediate hops.
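Latency and jitter can be pulled straight out of ping's summary line with awk. The line below is an illustrative sample of the standard format; in practice generate it with something like `ping -c 10 host | tail -1`:

```shell
# Derive latency and jitter from ping's final summary line.
# The sample string is illustrative; substitute real output from:
#   ping -c 10 host | tail -1
summary="rtt min/avg/max/mdev = 12.311/14.502/19.874/2.118 ms"

# Split on both '/' and ' ': field 8 is avg RTT, field 10 is mdev (jitter)
avg=$(printf '%s\n' "$summary" | awk -F'[/ ]' '{print $8}')
mdev=$(printf '%s\n' "$summary" | awk -F'[/ ]' '{print $10}')

echo "latency=${avg}ms jitter=${mdev}ms"
```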
Impact of Protocol Parameters on Speed Test Results
Network communication relies on standardized protocols that dictate how data is transmitted between end points. Configuration parameters exposed by these protocols also directly impact observable speeds.
TCP Window Size – TCP transfers bundles of bytes governed by a sliding window that keeps data flowing between hosts without overwhelming the receiver. Larger windows (e.g. several megabytes, enabled by window scaling) aid faster transfers on high-latency paths.
UDP Packet Size – Unlike TCP, UDP does not guarantee delivery, so packets have to be limited to sizes unlikely to be fragmented or dropped. Typical payloads stay under roughly 1472 bytes so a datagram fits within the common 1500-byte MTU after IP and UDP headers.
Error Correction – Modern transport protocols like QUIC selectively use redundancy and checksums to keep sessions uninterrupted in spite of packet loss. This resiliency lowers effective data rates.
Compression – Video/audio encoding algorithms selectively reduce quality or frame rates to fit bandwidth constraints. Effective throughput may therefore fluctuate independently of network capacity alone.
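On Linux, the kernel's TCP buffer (and therefore window) autotuning bounds are visible under /proc. A quick inspection sketch (the raised value in the comment is illustrative only):

```shell
# TCP autotuning bounds: min, default, and max buffer sizes in bytes.
cat /proc/sys/net/ipv4/tcp_rmem   # receive-side limits
cat /proc/sys/net/ipv4/tcp_wmem   # send-side limits

# Raising the max helps on high bandwidth-delay-product paths
# (illustrative value; requires root):
# sudo sysctl -w net.ipv4.tcp_rmem='4096 131072 33554432'
```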
Client Versus Server-Limited Speed Test Interpretation

Download speed test results can be limited either by client-side capacity or by server-side constraints.
Tools like fast-cli and speedtest-cli attempt to eliminate server-side bottlenecks by provisioning extensive capacity on backend infrastructure. But this assumes clients routing traffic to these services across diverse network paths can fully utilize this bandwidth.
On the other hand, curl and wget tests against a single web server largely measure the outbound capacity that server exposes. Such self-hosted tests essentially sidestep the complexity of intermediate routing.
Ideally, capacity measured from client to a test service should approximately match server to client transfers for the same endpoints. If mismatches occur, further analysis into routing and peering relationships is warranted.
Getting Optimal Accuracy Out of Command Line Speed Tests
Here are some general best practices to follow when utilizing the above tools for benchmarking internet speeds:
- Use endpoints close to the actual servers of interest to minimize variance introduced by routing.
- Keep test durations to at least 30 seconds, or transfer sizes to > 50MB. Shorter tests risk sampling ephemeral speed bursts.
- For consistent monitoring, schedule periodic automated tests using cron jobs rather than relying on spot checks.
- Repeat measurements multiple times and apply smoothing based on averages rather than peaks.
- Capture test results in persistent files for correlating with other system/network statistics over timelines.
- Check speeds against different endpoints to isolate systemic bottlenecks from transient ISP or first hop issues.
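The averaging practice above is a one-liner with awk. A minimal sketch, using illustrative sample values (one Mbit/s reading per test run):

```shell
# Smooth repeated measurements with an average rather than trusting one peak.
# The values below are illustrative samples in Mbit/s, one per test run;
# in practice append one measured result per scheduled test.
printf '%s\n' 91.2 88.7 94.1 90.5 89.9 > /tmp/speeds.log

avg_line=$(awk '{sum += $1; n++} END {printf "avg=%.2f Mbit/s over %d runs", sum/n, n}' /tmp/speeds.log)
echo "$avg_line"

rm -f /tmp/speeds.log
```

Keeping the raw per-run values alongside the average also lets you spot outliers and trends later.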
Adopting the above practices helps minimize the variability and noise that could otherwise mask true capabilities. All of these tools also lend themselves to scripting for large-scale automated performance monitoring.
Conclusion
In closing, actively monitoring bandwidth and maintaining visibility into end-to-end network metrics is pivotal for technology professionals. This guide should give system administrators a comprehensive starting point for leveraging Linux-native command line utilities to establish operational insight.
The tools and protocols outlined have unique strengths making them suitable for specific use cases. Over time, hands-on experience determining typical throughput baselines and boundaries helps intuitively reason about connectivity issues.
Eventually, capacity planning and optimizing infrastructure for scale becomes second nature. Mastering these foundational network analysis skills marks an important milestone in advancing Linux administration expertise.