# The Linux `time` Command (with Practical Examples and Pitfalls)

When something feels "slow" on Linux, my first question is rarely "how fast is my CPU?" It's "where is the time going?" Are you waiting on disk I/O, DNS, a remote API, a mutex, page cache misses, or just burning CPU in a hot loop?

That's why I still reach for `time` even in 2026. It's not a profiler, and it won't tell you why something is slow, but it gives you an immediate, trustworthy scoreboard: wall-clock elapsed time (`real`), CPU time spent in user mode (`user`), and CPU time spent in the kernel (`sys`). With those three numbers, you can usually classify a workload in under a minute and decide what to do next.

In the rest of this post I'll show you how I read `time` output, how I choose between the shell's `time` and GNU `/usr/bin/time`, and a set of copy-paste recipes I use for scripts, pipelines, logging, and repeatable benchmarks, plus the common traps that make timing results misleading.

## What `time` Actually Measures (and What It Doesn't)

At its core, `time` runs a command and prints a summary when the command finishes.

- Real time (`real`): elapsed wall-clock time. This includes waiting: disk, network, scheduler delays, and time your process is blocked.
- User time (`user`): CPU seconds spent executing your program in user space.
- System time (`sys`): CPU seconds spent in kernel space on behalf of your process (syscalls, page faults, filesystem, networking, etc.).

A quick mental model I use:

- If `real` is high but `user+sys` is low, you're mostly waiting.
- If `user` is high, you're mostly doing compute in user space.
- If `sys` is high, the kernel is doing a lot of work for you (heavy I/O, many syscalls, context switching, lots of small reads/writes, etc.).

Two important limitations:

1) `time` is not a profiler. It won't tell you which function is hot, which lock is contended, or which syscall dominates. It only tells you how much time was spent, not where.

2) `time` measures what you run, not the whole system.
If your command spawns children, `time` will generally include their CPU time too (depending on which `time` you use and how it's invoked), but it's still scoped to that command's process tree, not "everything the machine did during that interval."

A third limitation that matters in the real world: `time` is only as meaningful as the workload you give it. Timing a command that runs in 8 milliseconds is usually measuring noise (scheduler jitter, CPU frequency changes, cache effects), not your change. In those cases I either increase the work per run (bigger input, more iterations) or use a benchmark harness.

## Choosing the Right time: Shell Keyword vs /usr/bin/time

On many Linux systems you have two "times":

- A shell keyword (built into bash, zsh, etc.)
- An external binary, commonly GNU time at `/usr/bin/time`

This matters because options and formatting differ.

I start with this:

```
type -a time
```

You might see output like:

- `time is a shell keyword`
- `time is /usr/bin/time`

If I want portability and simple output, I often use the shell keyword. If I want rich formatting (memory, exit status, custom format strings), I reach for GNU `/usr/bin/time`.

Two reliable patterns:

- Force the external binary:

  ```
  /usr/bin/time -p sleep 1
  ```

- Force the shell keyword in many shells:

  ```
  time -p sleep 1
  ```

If there's ambiguity (or you're writing scripts that run under different shells), I recommend being explicit with `/usr/bin/time`.

One more nuance: aliases and functions can also shadow things. If you or your dotfiles have an alias named `time`, `type -a` will show it.
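That resolution check is easy to script. Here's a minimal sketch (assuming bash; the `-x` test guards against systems where GNU time isn't installed):

```shell
#!/usr/bin/env bash
# Show every definition of "time" the shell knows about:
# keyword, alias, function, and any binaries on PATH.
type -a time

# type -t prints only the winning resolution; in a clean bash
# session this is "keyword".
type -t time

# Calling the binary by absolute path bypasses the keyword and any
# aliases entirely (guarded in case GNU time isn't installed here).
if [ -x /usr/bin/time ]; then
  /usr/bin/time -p true
fi
```

In zsh, `whence -va time` gives a similar listing.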
In scripts, I avoid relying on interactive shell configuration and prefer explicit paths for anything that affects parsing or output.

## Reading the Output: real, user, sys (with mental models)

Here's the simplest "dummy job" example:

```
time sleep 3
```

You'll see something like:

- `real` around 3 seconds
- `user` near 0
- `sys` near 0

That's exactly what you want: `sleep` mostly waits.

### A CPU-bound example

If you want a quick CPU-bound workload (no fake placeholders), I often use hashing a block of random data. This is not a perfect benchmark, but it's good for reading `time`.

```
head -c 200M /dev/urandom | sha256sum >/dev/null
```

Now time it:

```
time bash -c 'head -c 200M /dev/urandom | sha256sum >/dev/null'
```

Typical pattern:

- `user` is high
- `sys` is modest
- `real` is close to `user+sys` on an otherwise idle machine

### An I/O-heavy example

Here's a "kernel does work" style workload:

```
# Create a 2 GiB file of zeros quickly (may still hit storage constraints)
/usr/bin/time -p dd if=/dev/zero of=/tmp/zero2g.bin bs=16M count=128 status=none

# Force a read back (page cache may affect this unless you manage caches)
/usr/bin/time -p dd if=/tmp/zero2g.bin of=/dev/null bs=16M status=none
```

In I/O scenarios, you'll commonly see:

- `real` much larger than `user+sys` (waiting on storage)
- `sys` non-trivial (filesystem and block I/O paths)

### Why user + sys can exceed real

This confuses people the first time they see it, and it's totally normal.

If your workload uses multiple CPU cores (threads or child processes), CPU time accumulates across cores. Example: a build that uses 8 cores for 10 seconds can report ~80 seconds of CPU time while `real` stays around 10 seconds.

That's not "wrong". It's telling you how much total CPU was consumed.

A quick rule I use when reading results:

- `real` answers "how long did I wait?" (latency)
- `user+sys` answers "how much CPU did this cost?" (throughput, contention, energy, and money)

### A ratio trick I use for classification

If I'm in a hurry, I compute one mental ratio: `(user+sys)/real`.

- Close to 0.0–0.2: mostly waiting (I/O, network, lock contention, sleeps, backpressure).
- Around 0.8–1.2 on a mostly idle system: roughly single-core CPU-bound (or a mix).
- Larger than 1.0 by a lot: multi-core CPU usage (threads/processes) or heavy parallelism.

It's not a perfect diagnostic (a task can be both I/O-bound and multi-threaded), but it quickly tells me which tool to reach for next.

## Shell-Specific Knobs: TIMEFORMAT (bash) and TIMEFMT (zsh)

If you're using the shell keyword `time`, the shell often gives you its own formatting controls.
This is one of the most practical upgrades because it turns timing into structured logs without requiring GNU time.

### bash: TIMEFORMAT

In bash, you can set `TIMEFORMAT` to control how the keyword prints. A simple, parseable format might be:

```
TIMEFORMAT='real=%R user=%U sys=%S'
time sleep 1
```

Now you get a single line you can grep. If I'm doing multiple runs, I'll include a label:

```
TIMEFORMAT='label=mytest real=%R user=%U sys=%S'
time bash -c 'head -c 200M /dev/urandom | sha256sum >/dev/null'
```

Important detail: `TIMEFORMAT` only affects the keyword in that shell. It doesn't affect `/usr/bin/time`.

### zsh: TIMEFMT

In zsh, there's a similar variable, `TIMEFMT`. You can set it to include timing fields and even additional information. The exact escape sequences differ from bash, so I treat it as a shell-specific convenience rather than something I'd put in a portable script.

My approach is simple:

- For portable scripts and CI logs: use `/usr/bin/time -f ...`
- For interactive work in my own shell: use `TIMEFORMAT`/`TIMEFMT` for quick, consistent output

## Practical Recipes You'll Reuse

This is the section I wish I had as a cheat sheet years ago.

### 1) Time a single command (basic)

```
time rg -n "TODO" .
```

This is my fastest sanity check when a command feels slow.

### 2) Time a network download (and interpret it correctly)

```
/usr/bin/time -p curl -L -o /tmp/archive.tgz https://example.com/archive.tgz
```

If `real` is high but `user` and `sys` stay low, the bottleneck is usually network latency/bandwidth or remote server response, not your CPU.

If I want to separate "download time" from "decompression time," I time them separately. For example:

```
/usr/bin/time -p curl -L -o /tmp/archive.tgz https://example.com/archive.tgz
/usr/bin/time -p tar -xzf /tmp/archive.tgz -C /tmp/extracted
```

That avoids the common mistake of attributing slow decompression to the network.

### 3) Time a shell script

```
/usr/bin/time -p ./daily-report.sh
```

If the script internally runs many commands, this gives you an end-to-end number.
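The keyword works here too: because it accepts any compound command, you can time a whole shell function. A sketch (the function body is a hypothetical stand-in for your real script's steps):

```shell
#!/usr/bin/env bash
# Time a multi-step routine as one unit using the shell keyword.
# daily_report is a hypothetical stand-in for ./daily-report.sh.
daily_report() {
  sha256sum /etc/hostname >/dev/null 2>&1 || true   # placeholder step 1
  for _ in $(seq 1 10000); do :; done               # placeholder step 2
}

# One parseable summary line for the whole function call.
TIMEFORMAT='label=daily_report real=%R user=%U sys=%S'
time daily_report
```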
If it's slow, I then time key sections inside the script (more on that later).

### 4) Time multiple commands as one unit

Use braces so you time the group:

```
/usr/bin/time -p bash -c '{
  rg -n "ERROR" /var/log/syslog >/dev/null
  rg -n "WARN" /var/log/syslog >/dev/null
  rg -n "INFO" /var/log/syslog >/dev/null
}'
```

I often do this when I care about "workflow time" rather than a single command.

When I'm already in bash, I'll also do this with the keyword:

```
time {
  rg -n "ERROR" /var/log/syslog >/dev/null
  rg -n "WARN" /var/log/syslog >/dev/null
}
```

The braces matter: without them, you're timing only the first command.

### 5) Time a pipeline (what exactly are you timing?)

If you do this:

```
time head -c 200M /dev/urandom | sha256sum >/dev/null
```

you're timing the pipeline as a whole (shell-dependent behavior). If you specifically want to time one part of the pipeline, wrap that part in a subshell:

```
head -c 200M /dev/urandom | { /usr/bin/time -p sha256sum >/dev/null; }
```

That times `sha256sum` only.

One more pipeline trick I use: if I want timing plus the exit status of a pipeline, I rely on `set -o pipefail` and time a shell that runs the pipeline. It keeps the logic simple and avoids subtlety around which process the timing is attached to.

### 6) Time make builds (and understand the output)

```
/usr/bin/time -p make -j"$(nproc)"
```

If `user+sys` dwarfs `real`, the build is parallel and CPU-heavy. If `real` is large with low CPU time, you may be waiting on disk (many small files) or network (remote cache, dependency fetch).

When I'm trying to make builds more repeatable, I also time "clean builds" vs "incremental builds" separately. Those are different workloads with different bottlenecks.

### 7) Time a command and still get its exit code

This matters in CI scripts.

```
set -euo pipefail

/usr/bin/time -p bash -c 'python3 -m compileall -q src'
```

GNU time itself usually exits with the same status as the timed command, but I avoid cleverness in production scripts. Wrapping in `bash -c` also keeps the boundary clear.

If you need to both capture the timing output and preserve the command's stderr for logs, consider directing only time's stderr to a separate FD (see the logging section below).

### 8) Time "startup" vs "steady state"

A lot of modern workloads are dominated by startup costs: interpreter startup, imports, JIT warmup, reading config, establishing TLS connections.
If you only time one run, you won't know whether the slowness is "cold start" or "steady state."

A pattern I use is to explicitly time both:

```
# Cold start
/usr/bin/time -p python3 -c 'import yourmodule; yourmodule.main()'

# Warm start (same process)
python3 - <<'PY'
import time
import yourmodule

start = time.time()
yourmodule.main()
print('run1=', time.time() - start)

start = time.time()
yourmodule.main()
print('run2=', time.time() - start)
PY
```

This isn't about replacing `time`. It's about separating two performance questions: "How long until I can do work?" and "How fast is the work once running?"

### 9) Time file syncs and interpret sys time

Tools like rsync can show surprising `sys` time when dealing with lots of small files, metadata operations, or checksum calculation. I time it like this:

```
/usr/bin/time -p rsync -a --delete ./src/ user@host:/srv/app/src/
```

If `real` is high and `sys` is high, I suspect filesystem overhead, many stats, and small I/O. Then I'll look at options like batching, excluding unnecessary files, or changing the sync strategy.

## Formatting, Logging, and Making Timing Repeatable

Once I'm done eyeballing numbers, I usually want two upgrades:

1) Consistent format so I can parse results.
2) Repeatability so noise doesn't fool me.

### POSIX format with -p

If you want a simple, portable format:

```
/usr/bin/time -p sleep 1
```

Output is typically:

```
real 1.00
user 0.00
sys 0.00
```

I like `-p` when I'm pasting results into notes or comparing across machines.

### Redirect timing output without mixing it with command output

A classic "gotcha": `time` writes its summary to stderr, not stdout.

So if you do:

```
/usr/bin/time -p rg -n "panic" app.log >matches.txt
```

- `matches.txt` gets the matches
- timing still prints to your terminal (stderr)

To capture the timing summary:

```
{ /usr/bin/time -p rg -n "panic" app.log >matches.txt; } 2>timing.txt
```

Now:

- `matches.txt` contains stdout
- `timing.txt` contains real/user/sys

If the program's stderr is important too (warnings, errors), you can keep it by redirecting timing output to a dedicated file descriptor. One pragmatic approach is: run the command with its stderr redirected, but keep the timing on your main stderr (or vice versa). The important idea is: plan where each stream goes, because `time` is "just another writer to stderr."

### Write timing directly to a file with -o (GNU time)

GNU time has `-o` for output files:

```
/usr/bin/time -p -o timing.log rg -n "panic" app.log >matches.txt
```

I often add `-a` to append:

```
/usr/bin/time -p -a -o timing.log rg -n "panic" app.log >matches.txt
```

This is convenient in batch runs.

### Custom format strings with -f (GNU time)

When I want structured logs, I use a custom format. Example:

```
/usr/bin/time -f "cmd=rg real=%e user=%U sys=%S maxrsskb=%M exit=%x" \
  rg -n "panic" app.log >/dev/null
```

Notes:

- `%e` elapsed (real) seconds
- `%U` user seconds
- `%S` sys seconds
- `%M` max resident set size in KB (handy memory signal)
- `%x` exit status

That single line is easy to grep, paste into a spreadsheet, or parse with a script.

If I'm investigating "why did this run slower today?", I'll also include context in the log line: hostname, git commit, dataset size, and maybe a timestamp. Example pattern:

```
/usr/bin/time -f "ts=$(date -Is) host=$(hostname) real=%e user=%U sys=%S maxrsskb=%M" \
  ./your-command --with-flags >/dev/null
```

It's not fancy, but it turns timing into an audit trail.

### The underused option: -v for a richer report

When I need more than real/user/sys but I still don't want a full profiling session, I use GNU time's verbose mode:

```
/usr/bin/time -v yourcommandhere
```

This can include details like maximum resident set size, page faults, context switches, and more.
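When I only care about a couple of those fields, I filter the verbose report rather than reading all of it. A sketch (the grep patterns match GNU time's report labels; `sort /etc/services` is just a stand-in workload):

```shell
# GNU time writes its report to stderr; `2>&1 >/dev/null` sends
# stderr into the pipe while the command's own stdout is discarded.
/usr/bin/time -v sort /etc/services 2>&1 >/dev/null |
  grep -E 'Maximum resident set size|context switches'
```

If I always want the same two or three fields, a `-f` format string gives me one line with no grepping at all.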
I treat it as a quick "resource usage snapshot." It often reveals surprises like:

- the command is memory-hungry (large maxrss)
- it's causing lots of context switches (often lock contention or too many short-lived processes)
- it's faulting pages unexpectedly (cold memory, heavy mmap usage, or poor locality)

### Timing inside scripts (section timing)

If you're writing a shell script and you want timing per step, you can combine `date` for coarse wall time with `time` for CPU accounting.

Example bash script pattern:

```
#!/usr/bin/env bash
set -euo pipefail

log() { printf '%s %s\n' "$(date -Is)" "$*"; }

timed() {
  local label="$1"; shift
  log "start label=${label}"
  /usr/bin/time -f "label=${label} real=%e user=%U sys=%S maxrsskb=%M" "$@"
  log "end label=${label}"
}

timed "compile" python3 -m compileall -q src
timed "bundle" node ./scripts/build.mjs
timed "tests" pytest -q
```

This gives you searchable timing lines per stage without inventing a full benchmark harness.

## Common Traps and How I Avoid Them

Timing is easy to do badly. Here are the mistakes I see most often, plus the fixes I actually use.

### Trap 1: Measuring a cold cache and calling it "the truth"

The first run often includes:

- disk reads that later come from page cache
- DNS/TLS warmup
- JIT warmup (Node, JVM)
- Python import caches

What I do instead:

- Run once to warm up
- Then run 5–10 times and look at the range

A simple manual loop:

```
for i in 1 2 3 4 5; do
  /usr/bin/time -p rg -n "customerid" big.log >/dev/null
done
```

If the first run is an outlier, that's useful information: it tells you cold-start cost is real.

### Trap 2: Ignoring stdout/stderr cost

Printing is slow.
If you time something that writes thousands of lines to your terminal, you may be benchmarking your terminal emulator.

I usually redirect output when I care about compute speed:

```
/usr/bin/time -p rg -n "customerid" big.log >/dev/null
```

Or redirect to a file if the program behaves differently when stdout is not a TTY.

### Trap 3: Timing only one side of a pipeline

Depending on your shell, `time cmd1 | cmd2` can time cmd1, cmd2, or the whole pipeline. If it matters, I make it explicit:

```
# Time the entire pipeline
/usr/bin/time -p bash -c 'cmd1 | cmd2'

# Time only cmd2
cmd1 | { /usr/bin/time -p cmd2; }
```

### Trap 4: Forgetting that parallelism changes the meaning of user and sys

If you see:

```
real 5.0
user 35.0
sys 4.0
```

that often means "this used many CPU cores." That's not a measurement bug.

When you're comparing two implementations, compare:

- `real` if you care about user-visible latency
- `user+sys` if you care about CPU cost (cloud bills, battery, contention)

### Trap 5: Comparing results across noisy system states

Background jobs, thermal throttling, and contention can easily move timings by 10–30% on developer laptops.

My practical fixes:

- Close noisy apps (browsers running heavy pages matter)
- Pin the CPU governor if you control the environment
- Run on an otherwise idle machine or a CI runner
- Take the median of multiple runs

### Trap 6: Assuming time includes everything you care about

If your command triggers work elsewhere (a remote build, a database doing heavy queries, a container pulling layers), `time` still measures the client command's perspective. That may be exactly what you want, but don't confuse it with server-side resource usage.

When I need the "why," I pair `time` with the next tool:

- high `sys`: check `strace -c` or `perf`
- high `real` and low CPU: check `iostat`, `iftop`, service logs
- inconsistent runs: check caching layers and warmups

### Trap 7: Locale and formatting surprises

This one is subtle: on some systems, decimal separators and number formatting can change based on locale. That can make timing output annoying to parse. If I'm producing machine-readable logs, I'll often run commands with a stable locale, e.g. `LC_ALL=C`, and stick to a single-line format (`-f`) that I control.

### Trap 8: Timing inside containers and cgroups

In containerized environments, CPU quotas and throttling can distort what "CPU-bound" feels like.
You may see:

- higher `real` than expected
- `user` that grows slowly because you're being throttled

This doesn't make `time` wrong. It makes it honest about the environment you're actually running in. When I'm benchmarking container workloads, I include the container's CPU and memory limits in my notes, because the same code can behave very differently under different quotas.

## Modern Workflow in 2026: time + Benchmark Harnesses

I still start with `time` because it's everywhere and it's instant. But when I need repeatable comparisons (especially for small changes), I switch to a benchmark harness.

Here's the way I choose tools today:

| Job | Classic tool | What I reach for now | Why |
| --- | --- | --- | --- |
| Quick classification (CPU vs wait) | `time` | `time` | Zero setup, always available |
| Compare command variants reliably | ad-hoc loops | `hyperfine` | Warmups, stats, outliers |
| Find hot CPU stacks | none | `perf` + flamegraphs | Answers "where in code?" |
| Find syscall/I/O overhead | `time` only | `strace -c` or eBPF tools | Answers "what kernel work?" |
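The "ad-hoc loops" entry doesn't have to mean sloppy: even without a harness, you can record every trial and report the median instead of trusting one run. A minimal sketch (assumes bash 5+ for `EPOCHREALTIME`; the workload function is a placeholder):

```shell
#!/usr/bin/env bash
# Run a workload several times and report the median wall-clock time,
# which is far more stable than a single measurement.
runs=5
workload() { head -c 1M /dev/zero | sha256sum >/dev/null; }  # placeholder

trials=()
for ((i = 0; i < runs; i++)); do
  start=$EPOCHREALTIME            # seconds.microseconds, bash 5+
  workload
  end=$EPOCHREALTIME
  trials+=("$(awk -v a="$start" -v b="$end" 'BEGIN { printf "%.4f", b - a }')")
done

# Sort the trials numerically and print the middle one.
printf '%s\n' "${trials[@]}" |
  sort -n |
  awk -v n="$runs" 'NR == int((n + 1) / 2) { print "median=" $0 "s over " n " runs" }'
```

It's crude (no warmup handling, no outlier detection), but it beats arguing over one noisy number.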

### A repeatable micro-benchmark with hyperfine

If you have it installed:

```
hyperfine --warmup 3 \
  'rg -n "customerid" big.log >/dev/null' \
  'rg -n --fixed-strings "customerid" big.log >/dev/null'
```

I'm not replacing `time` here; I'm upgrading the measurement when the decision is subtle.

### If you don't have a harness: make the workload bigger

I'll be honest: most "performance debates" I see are people arguing over 20–50ms differences measured once. If you don't want to install a harness, you can often get cleaner data by scaling the work up so that it runs for a few seconds.

Examples:

- Instead of grepping one file, grep a directory or multiple files.
- Instead of hashing 50MB, hash 500MB.
- Instead of one request, do 50 requests in a loop (while being mindful of rate limits).

The goal isn't to create an unrealistic benchmark. The goal is to move the signal above the noise floor so `time` gives you stable numbers.

### Pairing time with a "why" tool

A pattern I use constantly:

1) Run `time` to see what *kind* of slow it is.
2) Pick the next tool based on the shape.

Example decision tree:

- If `user` dominates: run `perf record` / `perf report`.
- If `sys` dominates: run `strace -c` first; if it's still unclear, use `perf` with kernel stacks.
- If `real` dominates with low CPU: check I/O (`iostat`, `pidstat -d`) or network (`ss`, `dig`, `curl -v`).

### AI-assisted workflow (without fooling yourself)

I do use AI tools to speed up the boring parts (parsing logs, generating a quick script to run 20 trials, summarizing distributions), but I don't ask them to "guess" performance conclusions from a single `time` run.

What works well:

- Ask for a small harness that runs N trials and records `/usr/bin/time -f ...` into CSV.
- Ask for help interpreting multiple runs (median, p95) when you already collected data.
- Ask for hypotheses ("high sys time often correlates with…") and then validate with strace/perf.

The rule I follow: measure first, explain second.

## When I Use time (and When I Don't)

Because `time` is so easy, it's tempting to use it for everything. I don't.

I like `time` when:

- I need to classify a slow command quickly (CPU vs I/O vs waiting).
- I'm checking whether a change is "obviously better" (seconds, not milliseconds).
- I need a lightweight metric in a script or CI job.

I avoid relying on `time` alone when:

- The task completes too quickly to measure reliably (sub-100ms).
- I need attribution ("which function?", "which query?", "which syscall?").
- The workload is highly variable (network, remote services) and I don't control conditions.

In those cases I still start with `time` to get a baseline, but I don't stop there.

## Next Steps

If you want this to stick, pick one real command you run weekly (builds, backups, log searches, data imports) and time it three ways: once "as is," once with stdout redirected to /dev/null, and once after a warmup run. Write down what happened to `real`, `user`, and `sys`. That single exercise will teach you more than memorizing flags.

After that, I recommend building a tiny timing habit into your scripts: wrap major stages with `/usr/bin/time -f "label=... real=%e user=%U sys=%S maxrsskb=%M"` and append to a logfile. When a deploy or dataset change makes things slower, you'll have hard numbers instead of vibes.

Finally, treat `time` as a classifier. When the numbers point at CPU, go straight to `perf` and flamegraphs. When they point at kernel overhead, start with `strace -c`.
When they point at waiting, check I/O and network tools, and don't forget to consider caching and cold-start effects.

If there's one takeaway I'd keep on a sticky note, it's this: `real` tells you what the user feels, `user+sys` tells you what the machine paid. Once you know which one you're optimizing, `time` becomes a surprisingly sharp instrument.
