A few years ago, I was debugging a flaky CI job that “randomly” failed. The job ran a health-check loop and printed a pile of progress logs. When the service was slow, the logs grew huge; when the service was down, the loop spammed errors. Someone had “fixed” it by redirecting everything to /dev/null—and accidentally hid the one line that would have told us the real failure cause. A week later, another teammate tried to create a fixed-size file for a test database and used /dev/null again… which produced an empty file and a confusing runtime crash.\n\nThose two incidents share the same root: /dev/null and /dev/zero are both special device nodes, but they behave very differently when you read from them. If you write shell scripts that redirect output, pre-create files, reserve space, or generate predictable bytes for tests, you need to feel that difference in your fingers.\n\nI’m going to show you what each device does, how the kernel treats reads vs writes, the redirection patterns I trust, and the common foot-guns I’ve seen in production.\n\n## Device nodes: what you’re actually talking to\nOn Unix-like systems, “device files” in /dev are not regular files. They’re entry points the kernel exposes so user-space programs can talk to drivers using normal file I/O calls (open, read, write, close). The magic is that the semantics come from the device driver, not from a filesystem storing bytes.\n\nMost of the time you can treat /dev/null and /dev/zero like ordinary files in shell scripts because they behave consistently across Linux distros, containers, and VM images. 
Still, it’s worth remembering: these are typically character devices (not block devices), and the kernel answers your reads/writes directly.\n\nIf you want to confirm what you’re looking at on your own machine:\n\n ls -l /dev/null /dev/zero\n stat /dev/null /dev/zero\n\nYou’ll see they’re owned by root, readable/writable by everyone, and have stable major/minor numbers on Linux.\n\nHere’s the core comparison I keep in my head:\n\n
|                | /dev/null                 | /dev/zero                 |
| -------------- | ------------------------- | ------------------------- |
| path           | /dev/null                 | /dev/zero                 |
| device numbers | major 1, minor 3          | major 1, minor 5          |
| owner          | root:root                 | root:root                 |
| permissions    | typically 666 (rw-rw-rw-) | typically 666 (rw-rw-rw-) |
| write behavior | discards all bytes        | discards all bytes        |
| read behavior  | returns EOF immediately   | endless zero (0x00) bytes |

That last row—read behavior—is the difference that matters for shell scripting.

### Reads vs writes: the “two directions” mental model
When I review shell scripts, I notice people talk about these devices as if they have one job (“null discards stuff,” “zero makes zeros”). What helps more is to split the concept in two: what happens when a program reads from it, and what happens when a program writes to it.

- /dev/null
  - Read: immediate EOF (like an empty file forever)
  - Write: accepts bytes and discards them
- /dev/zero
  - Read: returns 0x00 forever (until the reader stops)
  - Write: accepts bytes and discards them

That alone explains 90% of real-world behavior. The remaining 10% is about shell redirection and file descriptors, which I’ll get to.

### A quick portability note (Linux, macOS, containers)
On Linux, /dev/null and /dev/zero are provided by the kernel and behave the same across distros. On macOS and BSDs, they exist and behave the same way for shell scripting purposes too. In minimal containers, they’re still usually present because they’re fundamental, but you can run into weird images where /dev is incomplete if it’s heavily sandboxed or misconfigured.

If you’re writing scripts meant to run in odd environments, it’s reasonable to fail early with a clear message:

    [ -c /dev/null ] || { echo "this environment has no usable /dev/null" >&2; exit 1; }
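You can feel the read-side difference in one tiny experiment: ask each device for five bytes and count what comes back. This is a throwaway sketch using only head -c and wc -c, which are available pretty much everywhere:

```shell
# /dev/null: reading hits EOF immediately, so we get 0 bytes back.
null_bytes=$(head -c 5 /dev/null | wc -c)

# /dev/zero: reading never ends on its own; head -c caps it at 5 bytes.
zero_bytes=$(head -c 5 /dev/zero | wc -c)

echo "read from /dev/null: $null_bytes bytes"
echo "read from /dev/zero: $zero_bytes bytes"
```

Same command, two devices, two completely different answers: the sink gives you nothing, the generator gives you exactly as much as you cap it at.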
## /dev/null: the sink
Think of /dev/null as “the sink.” Anything written there disappears. Reading from it produces no data and immediately hits end-of-file.

### The most common shell pattern: silence a command
You silence standard output by redirecting file descriptor 1:

    command >/dev/null

You silence standard error (file descriptor 2) like this:

    command 2>/dev/null

You silence both like this:

    command >/dev/null 2>/dev/null

Or the shorter (and easy-to-mess-up) form:

    command >/dev/null 2>&1

I’ll explain when that short form is safe in a later section.

### Real scripts: making logs quiet without hiding failures
In my own scripts, I almost never silence output by default. I silence output only when:

- I’m running a check in a loop and success output is noisy.
- I’ve already captured the error and plan to print a better one.
- I’m probing for capability (feature detection) where failures are expected.

Example: check whether systemctl is present, without printing “command not found”:

    #!/usr/bin/env bash
    set -euo pipefail

    if command -v systemctl >/dev/null 2>&1; then
      echo "systemctl is available"
    else
      echo "systemctl not found; falling back" >&2
    fi

Notice: I still print a message to stderr when the tool isn’t there; I’m silencing only the detection command.

### /dev/null as a fast, safe “discard” file
Because writes are discarded, /dev/null is a handy target when you need to satisfy a command that insists on writing somewhere.

Example: curl download test where you only care about the HTTP status and latency:

    #!/usr/bin/env bash
    set -euo pipefail

    url="https://example.internal/health"

    # -sS: quiet progress, still show errors
    # -o: send response body to /dev/null
    # -w: print timing and status
    curl -sS -o /dev/null -w "status=%{http_code} time_total=%{time_total}\n" "$url"

This pattern is also useful with tools like wget, gh, or anything else that wants an output file even when you’re just checking
connectivity.\n\n### Clearing a file: redirecting from /dev/null\nThis pattern is popular:\n\n cat /dev/null > app.log\n\nIt works, but I don’t write it that way. It spawns cat for no reason. The shell can truncate the file itself:\n\n : > app.log\n\nOr:\n\n truncate -s 0 app.log\n\nIf you’re maintaining scripts across minimal container images, : > file is my go-to because it requires only the shell.\n\n### Using /dev/null to “close stdin” (a subtle but valuable trick)\nHere’s a pattern that doesn’t get talked about enough: redirecting stdin from /dev/null.\n\nSome commands will pause and wait for interactive input if stdin is connected to a terminal, or if they think they can read more. In automation, I like to be explicit when I don’t want a command to read from the terminal at all:\n\n somecommand </dev/null\n\nThis makes stdin behave like an empty file: any read attempt returns EOF immediately.\n\nPlaces I use this:\n\n- Preventing ssh or remote commands from consuming the rest of my script’s stdin\n- Making sure a tool can’t hang waiting for input\n- Running a command in CI that occasionally prompts (bad tool behavior, but reality happens)\n\nA concrete example: ensure ssh doesn’t steal input from the script (especially when you pipe into your script):\n\n ssh -o BatchMode=yes user@host "do-the-thing" </dev/null\n\nIf you’ve ever had a CI job mysteriously hang because a command tried to read from stdin, </dev/null is one of the simplest fixes.\n\n### What not to do with /dev/null\nThe big mistake is treating /dev/null as a “source” of bytes.\n\nThis will create an empty file, not a file full of zeros:\n\n dd if=/dev/null of=empty.bin bs=1M count=10\n\ndd reads EOF immediately and stops. That’s correct behavior for /dev/null reads.\n\nAnother subtle mistake: assuming cat /dev/null “does something.” It doesn’t; it prints nothing and exits successfully. 
If you see it in a script, it’s almost always accidental cargo culting.

## /dev/zero: an endless stream of NUL bytes
/dev/zero behaves like a generator. Reads never hit EOF (until the reading program stops), and every byte is 0x00. Writes are discarded, just like /dev/null.

### Why NUL bytes matter in practice
A “zero byte” is not the character ‘0’ (ASCII 0x30). It’s a literal 0x00. In many contexts it’s treated as “empty” or “unset.” For binary formats, NUL bytes are common padding.

If you peek at a small chunk:

    head -c 16 /dev/zero | od -An -t x1

You’ll see sixteen 00 bytes.

A practical implication: if you accidentally feed NUL bytes into a text tool that isn’t binary-safe, it might behave strangely. Some tools will print warnings like “binary file matches,” and others will stop early depending on implementation. When I’m doing “zero input” tests, I use tools that are known to handle raw bytes (dd, head -c, sha256sum, gzip, etc.).

### The classic use: create a fixed-size zero-filled file
The canonical dd recipe looks like this:

    dd if=/dev/zero of=zero-filled.bin bs=1M count=64

That produces a 64 MiB file filled with zeros.

In real projects I often want something more explicit and safer:

    #!/usr/bin/env bash
    set -euo pipefail

    out="fixture-64MiB.bin"
    size_mib=64

    # status=none keeps CI logs clean
    dd if=/dev/zero of="$out" bs=1M count="$size_mib" status=none

    # Verify: show size and confirm it contains zeros at the start
    ls -lh "$out"
    head -c 32 "$out" | od -An -t x1

The head | od line is a tiny sanity check that pays for itself when someone “optimizes” your script later.

### Another common use: deterministic input for benchmarks
When I benchmark a pipeline, I want predictable input so I’m measuring the pipeline, not randomness.

Example: measure how fast a command can read 512 MiB:

    #!/usr/bin/env bash
    set -euo pipefail

    bytes=$((512 * 1024 * 1024))

    # Feed zeros into the program; discard output.
    # Replace your_command with something that reads stdin.
    head -c "$bytes" /dev/zero | your_command >/dev/null
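Before letting a benchmark like that chew through hundreds of mebibytes, I sometimes probe with a small size first to confirm the cap behaves as expected. A quick sketch (the 1 KiB probe size is arbitrary):

```shell
# Probe: confirm head -c hands downstream exactly the bytes we asked for.
probe=$(head -c 1024 /dev/zero | wc -c)
if [ "$probe" -eq 1024 ]; then
  echo "cap works: got exactly $probe bytes"
else
  echo "unexpected byte count: $probe" >&2
  exit 1
fi
```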
head -c is important: it bounds the stream. If you forget it, your command may run forever.\n\n### A note on /dev/zero in pipelines: always put the “limit” right next to it\nWhen I see /dev/zero on the left side of a pipe, my eyes immediately search for one of these nearby:\n\n- head -c N\n- dd count=...\n- a tool option that explicitly caps input size\n\nIf I don’t see a cap, I assume it’s a bug until proven otherwise. Infinite streams are fine in theory, but in shell scripting they turn into hung terminals, timeouts, and misleading “it works on my machine” situations.\n\n### Mapping memory (quick aside)\nYou’ll sometimes see /dev/zero referenced in C code to map a page of zero-filled memory. That’s a kernel-level pattern; in shell scripting you’re usually dealing with the file-stream behavior. I mention it only because it explains why /dev/zero exists historically: it’s a convenient, always-available source of zero bytes.\n\n## Redirection details that bite even experienced bash users\nMost bugs I review around these devices aren’t about /dev/null vs /dev/zero. They’re about file descriptor wiring.\n\nLet me slow this down and make it visceral, because once you understand it, a lot of shell behavior stops being “mystical.”\n\n### File descriptors in 30 seconds (enough to avoid common traps)\nEvery process usually starts with three file descriptors open:\n\n- 0 = stdin\n- 1 = stdout\n- 2 = stderr\n\nRedirection in the shell changes what those numbers point to for the command you’re running. Importantly: redirections are processed left-to-right. That’s why order matters.\n\n### >/dev/null 2>&1 vs 2>&1 >/dev/null\nThese two look similar. 
They are not.

- command >/dev/null 2>&1 means:
  1) redirect stdout to /dev/null
  2) redirect stderr to wherever stdout currently goes (which is /dev/null)
  Result: both streams go to /dev/null.

- command 2>&1 >/dev/null means:
  1) redirect stderr to wherever stdout currently goes (your terminal, or a file)
  2) redirect stdout to /dev/null
  Result: stdout goes to /dev/null, but stderr still goes to the original stdout target.

If you want “silence everything,” I write the long form in production scripts because it’s hard to misread during a code review:

    command >/dev/null 2>/dev/null

Yes, it’s two redirects. I’ve seen fewer incidents with it.

### Avoiding accidental swallow of errors
I recommend a simple rule: never silence stderr unless you have a specific reason.

Bad pattern in a deploy script:

    kubectl apply -f manifest.yaml >/dev/null 2>&1

If it fails, your pipeline logs are blank and you waste time.

Better pattern:

    kubectl apply -f manifest.yaml >/dev/null

You’ll still see the error.

Even better: capture and re-emit with context:

    #!/usr/bin/env bash
    set -euo pipefail

    if ! err=$(kubectl apply -f manifest.yaml 2>&1 >/dev/null); then
      echo "deploy failed: kubectl apply" >&2
      echo "$err" >&2
      exit 1
    fi

This keeps logs readable while still preserving the failure signal.

One extra nuance: err=$(...) captures stdout from the subshell, not stderr. That’s why the redirection order inside matters: 2>&1 >/dev/null sends stderr into stdout (so it gets captured), then discards stdout afterward. If you reverse it, you’ll capture nothing.

### Subshells and grouping: keep redirections local
If you have a block of noisy checks, group them so you don’t silence unrelated output:

    {
      command_a
      command_b
      command_c
    } >/dev/null

Now only stdout from the block is silenced.
If you also want to silence stderr, do it intentionally.

This grouping trick is also useful when you want to redirect output of multiple commands into a file without individually tagging each one.

### Process substitution and “why is my redirect not working?”
In Bash and Zsh, you’ll sometimes see process substitution like this:

    diff <(command1) <(command2)

This is a powerful pattern, but it can confuse redirection expectations. If command1 writes errors to stderr, those won’t magically go into the <(...) stream. If you want them inside, you must redirect inside the substitution:

    diff <(command1 2>&1) <(command2 2>&1)

If you don’t want noisy stderr, redirect it to /dev/null inside the substitution:

    diff <(command1 2>/dev/null) <(command2 2>/dev/null)

This is one of those places where being explicit saves you from “works on my machine” weirdness.

## File creation patterns: zero-filled vs sparse vs “just reserve space”
A lot of people reach for /dev/zero when they really want one of three different results:

1) a file that reads back as zeros everywhere (fully allocated, written)
2) a sparse file that appears to have size but doesn’t consume blocks yet
3) disk space reserved/allocated without writing all those bytes

These differ in speed, disk usage, and behavior under failure.

### dd if=/dev/zero: real zeros written
dd if=/dev/zero writes bytes through the filesystem. That means:

- It can be slow for very large files.
- It does a lot of I/O.
- It actually writes zeros, so reading the file later returns zeros even if the filesystem doesn’t do clever sparsing.

This is the right choice when you need deterministic content and you don’t want holes.

A practical dd tip: choose a block size that’s not painfully small.
bs=1M is a reasonable default for “make a file.” For tiny files, bs=4K or bs=64K is fine too.

### truncate: sets size, often sparse
If you want a file that reports a certain size but you don’t care about the underlying blocks yet:

    truncate -s 10G bigfile.img

This is typically fast. On many filesystems this produces a sparse file: it looks big, but it doesn’t consume 10 GiB immediately. Reading unwritten regions returns zeros.

In scripts that run on unknown filesystems, I treat truncate as “size metadata,” not “write data.” If your next step assumes blocks are allocated (for example, you’re testing “disk full” behavior), truncate might not be good enough.

If you want to check whether your file is sparse, you can compare its apparent size to its actual disk usage:

    ls -lh bigfile.img
    du -h bigfile.img

If ls says 10G but du says something tiny, you have holes. That’s expected and often desired.

### fallocate: asks the filesystem to allocate blocks
On Linux, fallocate can reserve blocks without writing them byte-by-byte:

    fallocate -l 10G bigfile.img

This is often the fastest path to “I want this much disk reserved now.” It’s not always supported the same way on every filesystem, and behavior can vary (especially around sparse behavior and copy-on-write filesystems).

On some copy-on-write filesystems, preallocation can behave differently than you expect, and in snapshot-heavy environments it can have storage implications. When storage behavior matters, I test it on the target filesystem rather than assuming.

### Which one I pick in real work
Here’s the decision I actually follow:

| Goal                                             | Tool            | Why                      |
| ------------------------------------------------ | --------------- | ------------------------ |
| Need predictable zero bytes                      | dd if=/dev/zero | writes actual zeros      |
| Need a file with a given size for a test harness | truncate        | fast, simple, portable   |
| Need to reserve disk space now                   | fallocate       | allocates blocks quickly |
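To see the sparse-versus-written difference concretely, here is a small sketch comparing a truncate file and a dd file of the same apparent size. It assumes GNU-style truncate and du are available; exact block counts are filesystem-dependent:

```shell
# Two 1 MiB files: one sparse (size metadata only), one with real zeros written.
sparse=$(mktemp)
full=$(mktemp)

truncate -s 1M "$sparse"                              # often creates a hole
dd if=/dev/zero of="$full" bs=1M count=1 status=none  # writes actual zero bytes

wc -c "$sparse" "$full"   # both report 1048576 bytes of apparent size
du -k "$sparse" "$full"   # disk usage usually differs (sparse file near 0)

rm -f "$sparse" "$full"
```

Both files read back as a megabyte of zeros; only the du numbers reveal which one actually hit the disk.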
dd is more likely to be available than fallocate, but it’s slower for huge files.

### “I just need a file of N bytes” (portable helper)
When I’m writing a script that needs a file of an exact size and I want it to work almost everywhere, I’ll do something like this:

    make_zeros_file() {
      # Usage: make_zeros_file BYTES PATH
      local bytes="$1"
      local path="$2"

      # Try truncate first (fast); fall back to dd (always available)
      if command -v truncate >/dev/null 2>&1; then
        truncate -s "$bytes" "$path"
      else
        # dd count is in blocks, so we use bs=1 for exact bytes (slower but correct)
        dd if=/dev/zero of="$path" bs=1 count="$bytes" status=none
      fi
    }

This doesn’t guarantee “fully allocated blocks,” but it guarantees the file size and (when sparse) reads as zero in unwritten regions, which is what most test harnesses need.

## Practical recipes I use in shell scripts
This is where /dev/null and /dev/zero stop being trivia and start paying rent.

### Recipe 1: “quiet success, loud failure” wrapper
I like a wrapper that keeps routine output quiet but never hides errors:

    #!/usr/bin/env bash
    set -euo pipefail

    run_quiet_ok() {
      # Runs the command; suppresses stdout; preserves stderr.
      "$@" >/dev/null
    }

    run_quiet_ok git fetch --prune
    run_quiet_ok docker image ls

This is a boring pattern, and boring is good for production scripts.

A small improvement I sometimes add: a debug mode. When DEBUG=1, I don’t silence anything.
That gives you quiet logs by default and good logs when you need them:

    run_quiet_ok() {
      if [ "${DEBUG:-0}" = "1" ]; then
        "$@"
      else
        "$@" >/dev/null
      fi
    }

### Recipe 2: check for a process without noise
If I’m polling for a service in CI, I don’t want ps or pgrep output in the logs:

    #!/usr/bin/env bash
    set -euo pipefail

    name="postgres"

    if pgrep -x "$name" >/dev/null; then
      echo "$name is running"
    else
      echo "$name is not running" >&2
      exit 1
    fi

### Recipe 3: generate a fixed payload for testing uploads
When testing multipart uploads or proxy limits, I often need a file of a certain size with known bytes. Zeros are fine:

    #!/usr/bin/env bash
    set -euo pipefail

    payload="upload-payload-25MiB.bin"

    # Create 25 MiB of zeros
    dd if=/dev/zero of="$payload" bs=1M count=25 status=none

    # Upload it
    curl -sS -o /dev/null -w "status=%{http_code} size_upload=%{size_upload}\n" \
      -F "file=@$payload" \
      https://example.internal/upload

If you do this kind of thing a lot, add cleanup so you don’t leave giant fixtures behind:

    tmp=$(mktemp)
    trap 'rm -f "$tmp"' EXIT
    dd if=/dev/zero of="$tmp" bs=1M count=25 status=none

### Recipe 4: bound an infinite stream
Any time /dev/zero appears in a pipeline, I want to see a bound right next to it (head -c, dd count=, etc.).

Good:

    head -c 4M /dev/zero | gzip -c >/dev/null

Bad:

    cat /dev/zero |
    sha256sum >/dev/null

The bad form never terminates on its own. It also measures hashing speed more than disk speed, so be clear about what you’re actually benchmarking.

### Recipe 6: create “dummy stdin” for commands in automation
I mentioned </dev/null earlier, but here’s a full example because it’s a lifesaver in real scripts. Suppose you have a command that sometimes prompts (or tries to read), and you want the script to fail fast rather than hang:

    #!/usr/bin/env bash
    set -euo pipefail

    if ! output=$(some_tool --maybe-prompts </dev/null 2>&1); then
      echo "some_tool failed" >&2
      echo "$output" >&2
      exit 1
    fi

That </dev/null guarantees it can’t ask you questions. If it tries, it’ll read EOF and usually fail immediately—which is what you want in CI.

### Recipe 7: test that a program correctly handles “empty input”
When I’m validating script behavior, I often want to ensure a tool behaves well when it gets empty stdin. /dev/null is the easiest way to simulate that:

    # Expect this to exit non-zero with a helpful message
    my_parser </dev/null

Then I’ll check that stderr contains the message I want. This is also a good way to reveal tools that hang waiting for input that never comes.

## Common mistakes and how I prevent them
I’ve reviewed enough shell scripts to have a short list of recurring failures. Here’s what I watch for and how I avoid them.

### Mistake 1: using /dev/null when you meant /dev/zero
Symptoms:
- a file is created but is empty
- a test fixture is “size 0” and later code crashes

Prevent it:
- If you need bytes, read from /dev/zero.
- If you need a sink, write to /dev/null.

I also add a sanity check right after file creation:

    [ -s "$file" ] ||
    { echo "expected non-empty file: $file" >&2; exit 1; }

And if size matters:

    expected=$((25 * 1024 * 1024))
    actual=$(wc -c <"$file")
    [ "$actual" -eq "$expected" ] ||
{ echo "size mismatch" >&2; exit 1; }\n\n### Mistake 2: forgetting to bound /dev/zero\nSymptoms:\n- commands run forever\n- CI job times out\n\nPrevent it:\n- Never use
/dev/zero in a pipeline without an explicit size limit.
- Prefer head -c N because it reads exactly N bytes.

If you prefer dd, I recommend status=none in CI to avoid megabytes of progress output:

    dd if=/dev/zero bs=1M count=128 status=none | your_command

### Mistake 5: using /dev/zero as a stand-in for “random”
Zeros are deterministic, which is often exactly what you want for repeatable tests—but sometimes you want variability (for compression tests, entropy-sensitive behavior, or fuzzing).

If you pipe zeros into gzip, you’ll get excellent compression. If you pipe real random bytes, compression will be poor. That difference can completely change your benchmark.

When the goal is “test with incompressible-ish data,” I use a random source (with care) rather than /dev/zero. I’m not going deep into randomness here, but the headline is: /dev/zero is for deterministic padding and fixed content, not for simulating real-world data.

### Mistake 6: confusing “empty” with “zero”
This shows up in scripts that “initialize” files. An empty file and a zero-filled file are not interchangeable.

- Empty file: length is 0; reading returns EOF immediately
- Zero-filled file: length is N; reading returns N bytes (all 0x00), then EOF

Programs that memory-map or read structured formats often behave differently depending on which one you created.
If your program expects a 64 MiB file and you give it an empty one, it might crash, throw “unexpected EOF,” or misbehave.

## Deeper shell patterns: redirection as an API boundary
Once you’re beyond toy scripts, redirections aren’t just “mute output”—they become a clean interface boundary between “noisy internals” and “stable outputs.”

### Use stdout for data, stderr for narration
If you take one discipline from this whole topic, make it this:

- stdout is for machine-readable output
- stderr is for logs, warnings, progress, and human context

Then /dev/null becomes a precise tool: you can silence narration without breaking data flows, or vice versa.

For example, suppose you have a function that prints the resolved IP of a hostname (data) but logs retries (narration). You can direct retries to stderr and keep stdout clean. Then callers can do:

    ip=$(resolve_host "$name" 2>/dev/null)

and they still get the data on stdout. That’s a design win, not a hack.

### Avoid global “mute everything” in libraries
If you’re writing shared shell functions, don’t bake in >/dev/null 2>&1 inside them unless you’re absolutely sure. It steals control from callers. Prefer to let the caller decide.

If you must support both, add a flag:

    run_tool() {
      if [ "${QUIET:-0}" = "1" ]; then
        tool "$@" >/dev/null
      else
        tool "$@"
      fi
    }

The key: keep stderr visible by default.

## Performance considerations (what matters, what doesn’t)
People sometimes overthink /dev/null and /dev/zero from a performance standpoint. For most scripting tasks, the performance difference isn’t in the device—it’s in how much data you move and where you move it. Still, there are a few practical things I pay attention to.

### /dev/null is fast, but your program might not be
Redirecting output to /dev/null avoids terminal rendering and file I/O, but your program still has to generate the output unless it has its own quiet mode.
For very chatty commands, it can be better to use the program’s built-in quiet flags so it does less work.\n\nExample: prefer a quiet option over dumping to /dev/null when the tool supports it well:\n\n- curl -sS ...\n- rsync --quiet ... (or more selective flags)\n- git often has flags that reduce output (depending on the subcommand)\n\n### dd knobs that matter in practice\nIf you use dd with /dev/zero a lot, a few options are worth knowing:\n\n- bs= controls block size. Bigger blocks usually reduce overhead.\n- count= controls how many blocks. Size = bs * count.\n- status=none keeps logs clean.\n- conv=fsync forces data flush before dd exits (useful when you need the file truly on disk before next step).\n\nExample: create a file and ensure it’s flushed (useful in flaky storage tests):\n\n dd if=/dev/zero of=test.img bs=1M count=256 status=none conv=fsync\n\nI don’t use conv=fsync by default because it can slow things down, but it’s good when correctness matters more than speed.\n\n### Sparse files can make benchmarks lie\nIf you use truncate to create a “big” file and then benchmark reading it, you might be benchmarking the filesystem’s behavior for holes rather than real disk I/O. On many systems, reading a sparse region returns zeros without reading from disk. That’s great, but it means your benchmark is not measuring what you think.\n\nIf you want “real I/O,” write real data (dd if=/dev/zero or another source) or use a tool designed for storage benchmarking.\n\n## Edge cases and gotchas in real environments\nThis is where shell scripts get surprising—not because /dev/null or /dev/zero change, but because other tools change around them.\n\n### When a command writes important output to stdout (and you silence it)\nSome tools use stdout for both “useful results” and “progress,” which is annoying but common. 
If you redirect stdout to /dev/null, you might throw away the one line you actually needed.

My mitigation: if I need data from a command, I capture it explicitly and then log my own summary. Example pattern:

    if ! json=$(some_tool --json 2>err.log); then
      echo "some_tool failed" >&2
      cat err.log >&2
      exit 1
    fi

Then you can choose whether to discard err.log or keep it. The point is: don’t blindly silence streams when you don’t control what’s in them.

### When a command treats NUL bytes specially
Text-processing tools (like grep, sed, awk) are not always happy with NUL bytes. If you feed /dev/zero into them, you may get surprising behavior or warnings.

If you want to stress-test a text tool, I typically generate input that is still textual (e.g., long lines of “a”), not raw NULs. /dev/zero is best for binary-safe pipelines.

### When scripts run under set -euo pipefail
I like set -euo pipefail, but it interacts with pipelines in ways that can surprise people. If you do:

    head -c 1M /dev/zero | some_tool >/dev/null
and some_tool exits early (maybe it only needed a small header), head may get a broken pipe and exit non-zero. With pipefail, that can cause the whole script to fail even though the overall behavior was fine.

How I handle it:

- If early-exit is expected, I explicitly allow it:

    head -c 1M /dev/zero | some_tool >/dev/null || true

- Or I use a tool that won’t complain the same way, depending on the scenario.

This is not a /dev/zero problem, but /dev/zero is a common trigger because it encourages “feed bytes into a pipeline” patterns.

## Alternatives you should know (so you pick the right tool)
/dev/null and /dev/zero are two points in a wider toolbox. Knowing nearby tools helps you avoid misusing them.

### /dev/full for testing “disk full” error handling
Some systems expose /dev/full, a special device that always fails writes with “No space left on device.” It’s incredibly useful for testing error handling paths without actually filling your disk.

If it exists on your system, you can do things like:

    echo "data" > /dev/full

and confirm your script handles the failure. I don’t rely on it in portable scripts, but I love it in local tests.

### mktemp and memory-backed storage
If the reason you’re using /dev/zero is “I need a file quickly,” consider where you create it. In CI, creating big files on slow disks can be painful. Some environments have a memory-backed temp directory (often /tmp). If you create temporary fixtures, put them in the right place:

    tmpdir=${TMPDIR:-/tmp}
    file=$(mktemp "$tmpdir/payload.XXXXXX")

Then fill it with zeros if you truly need content.

### printf and head for small, controlled inputs
If you only need a small number of bytes, /dev/zero can be overkill. Sometimes I just do:

    printf '00000000'

or:

    printf '\0\0\0\0'

But be careful: not all shells handle \0 escape sequences the same way in echo, which is why I prefer printf.
For binary content, printf is more predictable than echo.\n\n### Use the tool’s own options when available\nMany tools have built-in flags that replace the need for /dev/null. For example:\n\n- curl -o /dev/null is fine, but sometimes curl --head (or -I) better matches the intent if you truly only want headers.\n- Some programs can write logs to stderr and data to stdout via flags; prefer that to messy redirection.\n\nMy general rule: use redirection to shape the environment; use program options to shape the program. Combine both deliberately.\n\n## A “decision checklist” I actually use\nIf you’re still unsure which device belongs in your script, here’s the checklist I run mentally:\n\n1) Do I need to discard output?\n – Yes → redirect to /dev/null\n\n2) Do I need to prevent reading / make stdin empty?\n – Yes → redirect from /dev/null (</dev/null)\n\n3) Do I need a predictable stream of bytes?\n – Yes → read from /dev/zero (but cap it)\n\n4) Do I need a file with actual bytes written?\n – Yes → dd if=/dev/zero ...\n\n5) Do I only need a file with a size (content doesn’t matter yet)?\n – Yes → truncate (fast; often sparse)\n\n6) Do I need to reserve disk space now?\n – Yes → fallocate (Linux; test behavior)\n\nIf you follow that, the classic confusions disappear: /dev/null is about emptiness and discarding; /dev/zero is about producing bytes (specifically 0x00) when read.\n\n## Wrapping up\nBoth /dev/null and /dev/zero are deceptively simple: one is “nothing,” the other is “infinite zeros.” But in shell scripting, that difference isn’t trivia—it controls whether your scripts are quiet without being blind, whether your test fixtures are real or accidentally empty, and whether your pipelines terminate or hang.\n\nWhen I’m writing production scripts, I aim for three habits:\n\n- Be explicit about which stream you’re redirecting (stdout vs stderr vs stdin).\n- Use /dev/null to discard or to represent empty input, never as a byte source.\n- Use /dev/zero only when I 
need deterministic bytes, and always cap it so the script stays finite.



