I still remember the first time a “simple file read” bug turned into a production incident. The app worked locally, but in production it silently mangled customer names with accents, occasionally froze while writing logs, and leaked file handles under load. Nothing was “wrong” with Java—my mental model of I/O was incomplete.
If you write Java in 2026, you’re surrounded by I/O: reading config, parsing CSV exports, streaming HTTP responses, writing audit trails, piping subprocess output, and moving large files safely. The language gives you two big toolboxes: the classic stream APIs in java.io and the newer filesystem and channel APIs in java.nio / java.nio.file (often called NIO.2).
I’ll walk you through a working mental model (bytes vs characters, buffering, and closing), then show runnable examples for console input, formatted output, file read/write, directory operations, and a few “real-world” patterns (atomic writes, charset correctness, and large-file streaming). I’ll also call out mistakes I see even in senior code reviews—because I/O bugs are rarely loud; they’re subtle, slow, and expensive.
A Practical Mental Model: Streams, Bytes, Characters, Buffers
When I think about Java I/O, I picture a conveyor belt:
- A source produces data (keyboard, file, socket, memory).
- A stream/reader pulls data from the source.
- Optional wrappers add behavior (buffering, decoding bytes to characters, compression).
- A sink consumes data (console, file, socket, memory).
The first decision that saves you time is choosing the right “shape” of stream:
- Byte streams (InputStream, OutputStream) deal in raw bytes. Use them for images, PDFs, ZIP files, and any binary protocol.
- Character streams (Reader, Writer) deal in Unicode characters. Use them for text.
The second decision is almost always buffering:
- Unbuffered I/O often does many small reads/writes, which can be slow.
- Buffered I/O batches work in memory and reduces calls into the OS.
In practice, I almost always wrap:
- FileInputStream → BufferedInputStream
- FileOutputStream → BufferedOutputStream
- InputStream + charset → InputStreamReader → BufferedReader
- OutputStream + charset → OutputStreamWriter → BufferedWriter
The third decision: who closes what.
- Closing the outermost wrapper closes the underlying stream.
- Prefer try-with-resources so closure happens even on exceptions.
A small but important note: text I/O needs a charset. If you don’t specify one, Java may use the platform default, and that’s where “works on my machine” encoding bugs come from.
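To make those three decisions concrete, here is a minimal sketch (file name and content are placeholders; it creates its own temp file so it runs standalone). Closing the outermost BufferedReader closes the whole chain, and the charset is explicit:

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class WrapperChainDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("wrapper-demo-", ".txt");
        Files.writeString(file, "café\n", StandardCharsets.UTF_8);

        // Closing the outermost BufferedReader closes the InputStreamReader
        // and the FileInputStream underneath it -- one close() for the chain.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream(file.toFile()),
                        StandardCharsets.UTF_8))) {
            String line = reader.readLine();
            System.out.println(line); // decoded as UTF-8 regardless of platform default
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```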
Default Streams: System.in, System.out, System.err
Java starts every program with three standard streams already set up:
- System.in (standard input): an InputStream
- System.out (standard output): a PrintStream
- System.err (standard error): a PrintStream
I treat them as the “pipes” between your program and the environment. They’re great for CLI tools, scripts, build steps, and quick diagnostics.
Reading raw bytes from System.in
System.in is a byte stream, so the low-level read methods return integers (0–255) or -1 for end-of-stream.
import java.io.IOException;
public class SystemInSingleByteDemo {
public static void main(String[] args) throws IOException {
System.out.print("Type one character and press Enter: ");
int value = System.in.read(); // reads a single byte
if (value == -1) {
System.err.println("No input (end of stream).");
return;
}
System.out.println("You typed: '" + (char) value + "' (byte value=" + value + ")");
}
}
In real CLI programs, I rarely read raw bytes directly; I wrap System.in for text input.
Printing to System.out: print, println, printf
- print(...) writes without a newline.
- println(...) writes with a newline.
- printf(...) writes formatted output.
public class StandardOutDemo {
public static void main(String[] args) {
System.out.print("Deploying ");
System.out.print("service=");
System.out.println("orders-api");
int port = 8080;
double latencyMs = 12.34567;
System.out.printf("Listening on port %d%n", port);
System.out.printf("P95 latency ~ %.1fms%n", latencyMs);
}
}
Tip I give teams: use printf (or a logger) when you care about stable formatting—especially in scripts that parse output.
Printing errors to System.err
System.err is separate so tooling can treat errors differently (capture stdout, but still show errors in the terminal).
public class StandardErrDemo {
public static void main(String[] args) {
String inputPath = args.length > 0 ? args[0] : null;
if (inputPath == null) {
System.err.println("Missing required argument: inputPath");
System.err.println("Example: java StandardErrDemo ./data/orders.csv");
System.exit(2);
}
System.out.println("Reading from: " + inputPath);
}
}
If you build CLI tools, this separation is one of the easiest “professional touches” you can add.
Console Input Patterns: Scanner, BufferedReader, and Console
Console input is where beginners often start—and where subtle bugs appear when input formats get more complex.
Pattern 1: Scanner (friendly, slower, easy to misuse)
Scanner is approachable for token-based input. I use it for quick demos and small CLIs. In high-throughput parsing, I avoid it.
import java.util.Scanner;
public class ScannerDemo {
public static void main(String[] args) {
try (Scanner scanner = new Scanner(System.in)) {
System.out.print("Customer ID: ");
long customerId = scanner.nextLong();
System.out.print("Monthly budget (USD): ");
double budget = scanner.nextDouble();
System.out.printf("Loaded customerId=%d, budget=%.2f%n", customerId, budget);
}
}
}
Common mistake: mixing nextInt() / nextDouble() with nextLine() and then wondering why the line is “empty”. That’s because the newline is still sitting in the buffer.
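The fix is to consume the leftover newline before the next readLine-style call. A minimal sketch (the input string simulates what a user would type at the console):

```java
import java.util.Scanner;

public class ScannerNewlineFix {
    public static void main(String[] args) {
        // Simulated console input: a number, Enter, then a name, Enter.
        Scanner scanner = new Scanner("42\nAda Lovelace\n");
        int id = scanner.nextInt();
        scanner.nextLine(); // consume the newline nextInt() left in the buffer
        String name = scanner.nextLine();
        System.out.println(id + " -> " + name); // 42 -> Ada Lovelace
    }
}
```

Without the extra nextLine(), `name` would be the empty remainder of the first line.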
Pattern 2: BufferedReader for line-oriented input (my default for CLIs)
When input is naturally line-based (paths, commands, JSON lines), BufferedReader feels cleaner.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
public class BufferedReaderDemo {
public static void main(String[] args) throws IOException {
BufferedReader reader = new BufferedReader(
new InputStreamReader(System.in, StandardCharsets.UTF_8)
);
System.out.print("Enter an email address: ");
String email = reader.readLine();
if (email == null || email.isBlank()) {
System.err.println("No email provided.");
return;
}
System.out.println("Got: " + email.trim());
}
}
Notice I specified UTF-8 explicitly. That single choice prevents a lot of “character looks weird” reports.
Pattern 3: Console for secrets (passwords)
If you’re reading a password, don’t read it as a normal line if you can avoid it.
import java.io.Console;
public class ConsolePasswordDemo {
public static void main(String[] args) {
Console console = System.console();
if (console == null) {
System.err.println("No console available (running in an IDE or non-interactive environment).");
System.exit(1);
}
char[] password = console.readPassword("Password: ");
try {
// Never print passwords; this is just a placeholder for real auth.
if (password.length < 12) {
System.err.println("Password too short.");
System.exit(2);
}
System.out.println("Password accepted.");
} finally {
// Reduce how long the secret lives in memory
java.util.Arrays.fill(password, '\0');
}
}
}
In my experience, this is an easy way to avoid accidental credential leaks in logs and terminal history.
Classic File I/O with java.io: Streams, Readers/Writers, Buffering
Even if you mostly use Files (NIO.2), it pays to understand java.io because:
- Many libraries accept/return InputStream and OutputStream.
- Wrapping streams is how you add behaviors like buffering, compression, and hashing.
Copying a binary file with buffering
This pattern works for images, PDFs, ZIPs—anything binary.
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class BufferedFileCopy {
public static void main(String[] args) throws IOException {
if (args.length != 2) {
System.err.println("Usage: java BufferedFileCopy <source> <dest>");
System.exit(2);
}
String sourcePath = args[0];
String destPath = args[1];
// 64 KiB is a reasonable default buffer for many workloads.
byte[] buffer = new byte[64 * 1024];
try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(sourcePath));
BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(destPath))) {
int bytesRead;
while ((bytesRead = in.read(buffer)) != -1) {
out.write(buffer, 0, bytesRead);
}
// BufferedOutputStream flushes on close, but an explicit flush can help when debugging.
out.flush();
}
System.out.println("Copied " + sourcePath + " -> " + destPath);
}
}
When not to use this: if you’re writing a simple app that reads/writes small files and you don’t need stream wrappers, the NIO.2 Files.copy(...) convenience method is simpler.
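For comparison, here is that convenience method in a self-contained sketch (it creates and cleans up its own temp files):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FilesCopyDemo {
    public static void main(String[] args) throws IOException {
        Path source = Files.createTempFile("copy-demo-src-", ".bin");
        Files.write(source, new byte[] {1, 2, 3});
        Path dest = source.resolveSibling(source.getFileName() + ".copy");

        // One call handles the buffer loop, partial reads, and closing for you.
        Files.copy(source, dest, StandardCopyOption.REPLACE_EXISTING);

        System.out.println("Copied " + Files.size(dest) + " bytes"); // Copied 3 bytes
        Files.deleteIfExists(source);
        Files.deleteIfExists(dest);
    }
}
```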
Reading and writing text with explicit charset
Text I/O should be explicit about encoding. UTF-8 is a safe default for modern systems.
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
public class TextTransformDemo {
public static void main(String[] args) throws IOException {
if (args.length != 2) {
System.err.println("Usage: java TextTransformDemo <inputTxt> <outputTxt>");
System.exit(2);
}
String inputPath = args[0];
String outputPath = args[1];
try (BufferedReader reader = new BufferedReader(
new InputStreamReader(new FileInputStream(inputPath), StandardCharsets.UTF_8));
BufferedWriter writer = new BufferedWriter(
new OutputStreamWriter(new FileOutputStream(outputPath), StandardCharsets.UTF_8))) {
String line;
while ((line = reader.readLine()) != null) {
// Example transformation: normalize whitespace and uppercase
String cleaned = line.strip().replaceAll("\\s+", " ").toUpperCase();
writer.write(cleaned);
writer.newLine();
}
}
System.out.println("Wrote transformed file to: " + outputPath);
}
}
Common mistake: using FileReader / FileWriter without thinking about encoding. Those classes use the platform default charset, which is a recipe for inconsistent behavior across machines.
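If you must use FileReader / FileWriter, Java 11 added overloads that take a Charset, which removes the platform-default trap. A minimal sketch (temp file and sample text are placeholders):

```java
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileReaderCharsetDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("charset-demo-", ".txt");
        try {
            // Java 11+ constructors that take a Charset -- no default-charset surprise.
            try (FileWriter writer = new FileWriter(file.toFile(), StandardCharsets.UTF_8)) {
                writer.write("naïve café\n");
            }
            try (FileReader reader = new FileReader(file.toFile(), StandardCharsets.UTF_8)) {
                StringBuilder text = new StringBuilder();
                int c;
                while ((c = reader.read()) != -1) {
                    text.append((char) c);
                }
                System.out.print(text); // accented characters round-trip intact
            }
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```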
Modern File I/O with NIO.2: Paths, Files, and Safe Operations
For everyday filesystem work in 2026, java.nio.file is my first stop:
- Path is a better replacement for File.
- Files contains high-level helpers.
- Many methods accept OpenOptions and work nicely with try-with-resources.
Here’s a quick “traditional vs modern” mapping I use when mentoring:

- BufferedReader + FileInputStream → Files.readString(path, UTF_8)
- BufferedWriter + FileOutputStream → Files.writeString(path, text, UTF_8)
- Manual buffer-copy loop → Files.copy(source, dest, ...)
- Recursive File.listFiles() → Files.walk(path) / Files.find(...)
- Tricky, error-prone atomic replace → Files.move(..., ATOMIC_MOVE) (when supported)

Reading and writing small text files
When a file is “human-sized” (config, templates, small exports), this is clean and reliable:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
public class FilesReadWriteStringDemo {
public static void main(String[] args) throws IOException {
Path input = Path.of("config", "app.properties");
Path output = Path.of("build", "app.properties.copy");
String content = Files.readString(input, StandardCharsets.UTF_8);
// Example: enforce newline at end of file
if (!content.endsWith("\n")) {
content = content + "\n";
}
Files.createDirectories(output.getParent());
Files.writeString(output, content, StandardCharsets.UTF_8);
System.out.println("Wrote: " + output.toAbsolutePath());
}
}
When not to use this: if the file can be hundreds of MB or larger, read it as a stream instead of loading it all into a single String.
Streaming large files line-by-line
This approach keeps memory usage stable because it processes one line at a time.
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
public class LargeLogScanDemo {
public static void main(String[] args) throws IOException {
Path logPath = args.length > 0 ? Path.of(args[0]) : Path.of("logs", "server.log");
long errorCount = 0;
try (BufferedReader reader = Files.newBufferedReader(logPath, StandardCharsets.UTF_8)) {
String line;
while ((line = reader.readLine()) != null) {
// Non-obvious logic: match a simple pattern without regex overhead
if (line.contains(" ERROR ")) {
errorCount++;
}
}
}
System.out.println("Errors found: " + errorCount);
}
}
Walking a directory tree safely
Files.walk(...) is a great replacement for hand-rolled recursion.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
public class DirectoryWalkDemo {
public static void main(String[] args) throws IOException {
Path root = args.length > 0 ? Path.of(args[0]) : Path.of("data");
// Example: print the 10 largest files
Files.walk(root)
.filter(Files::isRegularFile)
.map(path -> {
try {
return new FileSize(path, Files.size(path));
} catch (IOException e) {
return new FileSize(path, -1);
}
})
.filter(entry -> entry.sizeBytes >= 0)
.sorted(Comparator.comparingLong(FileSize::sizeBytes).reversed())
.limit(10)
.forEach(entry -> System.out.println(entry.sizeBytes + " bytes\t" + entry.path));
}
private record FileSize(Path path, long sizeBytes) {}
}
Note: Files.walk(...) returns a stream that holds resources; the simplest safe pattern is wrapping it in try-with-resources. I skipped that here by consuming immediately, but in longer pipelines I do:
try (var paths = Files.walk(root)) { ... }
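Fleshing that note out, here is a self-contained sketch of the try-with-resources form (it creates its own sample directory, so the file names are placeholders):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class WalkWithResourcesDemo {
    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("walk-demo-");
        Files.writeString(root.resolve("a.txt"), "hello");
        Files.writeString(root.resolve("b.txt"), "world");

        long fileCount;
        // The stream holds open directory handles; try-with-resources releases them.
        try (Stream<Path> paths = Files.walk(root)) {
            fileCount = paths.filter(Files::isRegularFile).count();
        }
        System.out.println("Regular files found: " + fileCount); // Regular files found: 2
    }
}
```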
Atomic writes: the “don’t corrupt production” pattern
A pattern I recommend for important files (state snapshots, generated config, reports):
- Write to a temp file in the same directory.
- Flush and close.
- Move into place (atomic move when supported).
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
public class AtomicWriteDemo {
public static void main(String[] args) throws IOException {
Path target = Path.of("build", "daily-report.txt");
Files.createDirectories(target.getParent());
Path temp = Files.createTempFile(target.getParent(), "daily-report-", ".tmp");
try {
String report = "orders=1249\nrefunds=12\n";
Files.writeString(temp, report, StandardCharsets.UTF_8);
// Replace existing file; ATOMIC_MOVE is best-effort (depends on filesystem support)
Files.move(
temp,
target,
StandardCopyOption.REPLACE_EXISTING,
StandardCopyOption.ATOMIC_MOVE
);
} finally {
// If move failed, try to clean up the temp file
Files.deleteIfExists(temp);
}
System.out.println("Wrote report safely to: " + target.toAbsolutePath());
}
}
If you’ve ever seen a half-written JSON file crash a service on startup, this pattern is the fix.
Streams as Building Blocks: Compression, Hashing, and Piping
One reason I still teach java.io streams is composability: wrappers let you build exactly what you need.
A common real-world example: write compressed data and compute a hash as you write.
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.zip.GZIPOutputStream;
public class GzipAndHashDemo {
public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
String outputPath = args.length > 0 ? args[0] : "build/orders.json.gz";
MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
// Wrapper that updates the digest for every write
var digestStream = new java.security.DigestOutputStream(
new BufferedOutputStream(new FileOutputStream(outputPath)),
sha256
);
try (GZIPOutputStream gzip = new GZIPOutputStream(digestStream)) {
String jsonLines =
"{\"orderId\":\"A1001\",\"total\":39.95}\n" +
"{\"orderId\":\"A1002\",\"total\":12.50}\n";
// GZIPOutputStream is byte-based; encode explicitly
gzip.write(jsonLines.getBytes(java.nio.charset.StandardCharsets.UTF_8));
}
String hashHex = HexFormat.of().formatHex(sha256.digest());
System.out.println("Wrote: " + outputPath);
System.out.println("SHA-256: " + hashHex);
}
}
This is the style of I/O I like: small wrappers with single responsibilities you can stack.
Performance and Correctness: What I Watch For in Code Reviews
I/O is a magnet for bugs because it crosses boundaries: OS, filesystem, terminals, networks, encodings, and user input. Here’s what I actively look for.
1) Not specifying a charset for text
If you read or write text without specifying a charset, you’ve created a portability bug.
- Prefer StandardCharsets.UTF_8.
- For inputs that may be unknown, validate or detect encoding at the boundary (and document it).
2) Forgetting to close streams (or relying on finalizers)
Use try-with-resources. Always.
Bad pattern:
- open a stream, do work, forget to close on exception
Good pattern:
try (InputStream in = ...) { ... }
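A self-contained sketch of the good pattern with two resources (temp files are placeholders); both streams close in reverse declaration order, even if an exception is thrown mid-copy:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class TryWithResourcesDemo {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("twr-src-", ".dat");
        Path dst = Files.createTempFile("twr-dst-", ".dat");
        Files.write(src, new byte[] {42});

        // Both streams are closed automatically, even if transferTo throws midway.
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            in.transferTo(out); // JDK 9+ convenience for the manual buffer loop
        }
        System.out.println("Copied " + Files.size(dst) + " bytes"); // Copied 1 bytes
        Files.deleteIfExists(src);
        Files.deleteIfExists(dst);
    }
}
```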
3) Reading entire files into memory by accident
If the file can grow without bound (logs, exports, uploads), stream it.
- Prefer Files.newBufferedReader + readLine() for line processing.
- Prefer InputStream + buffer loop for binary.
4) Writing without flushing when it matters
If you’re writing to:
- a network socket,
- a process stdin,
- or a long-lived log stream,
then flushing strategy matters. Closing flushes, but long-running apps may not close for a while.
I don’t flush after every write (that can be slow). I flush:
- at logical boundaries (end of message)
- on shutdown hooks
- when interacting with an external process that expects prompt output
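Here is a sketch of the external-process case, assuming a POSIX environment where `cat` is available (it stands in for any line-oriented external tool):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

public class FlushAtBoundaryDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // `cat` simply echoes its stdin; any line-oriented tool behaves similarly.
        Process process = new ProcessBuilder("cat").start();
        try (BufferedWriter toProcess = new BufferedWriter(
                new OutputStreamWriter(process.getOutputStream(), StandardCharsets.UTF_8))) {
            toProcess.write("PING");
            toProcess.newLine();
            toProcess.flush(); // flush at the message boundary, not after every write
        }
        System.out.println("cat exited with " + process.waitFor());
    }
}
```

Without the flush, a long-lived process on the other end may sit waiting for a line that is still stuck in your buffer.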
5) Confusing System.out with logging
For production services, I usually write logs through a logging framework rather than System.out.println, because structured logs and log levels matter. For CLIs and scripts, System.out/System.err are perfect.
6) Treating exceptions as “impossible”
Filesystem operations fail for reasons you can’t control:
- permissions
- full disk
- antivirus locks
- concurrent writers
- missing directories
I recommend handling errors with enough context to debug:
- include the path
- include the operation
- keep the original exception
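A minimal sketch of that advice (the config path and helper name are hypothetical): wrap the failure with the path and the operation, and keep the original exception as the cause.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ErrorContextDemo {
    static String readConfig(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // Path + operation + original cause: enough context to debug later.
            throw new UncheckedIOException("Failed to read config file: " + path, e);
        }
    }

    public static void main(String[] args) {
        try {
            readConfig(Path.of("missing", "app.conf"));
        } catch (UncheckedIOException e) {
            System.err.println(e.getMessage() + " (cause: " + e.getCause() + ")");
        }
    }
}
```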
When to Use Which API (and When Not To)
I make this decision quickly by asking two questions: “Is it text?” and “Is it small?”
- Small text config/template/report: Files.readString / Files.writeString with UTF-8.
- Large text (logs, CSV exports): Files.newBufferedReader and process line-by-line.
- Binary data: InputStream / OutputStream with buffering.
- Need wrappers (gzip, digest, encryption): build a stream pipeline in java.io.
- Filesystem operations (walk, move, permissions): NIO.2 Path + Files.
What I avoid:

- FileReader / FileWriter for anything that crosses machines.
- Loading unknown-size files into a single String.
- Writing important files directly without an atomic write strategy.
A modern note: if you’re doing lots of blocking I/O concurrently (many sockets, many file reads), virtual threads (available in current LTS JDKs) can simplify the code by letting you keep a straightforward blocking style without drowning in callbacks. I still keep the I/O primitives the same; I just run them in a concurrency model that’s easier to reason about.
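A sketch of that idea, assuming JDK 21 or newer (the sample files it reads are created by the demo itself): the blocking Files.readString calls stay exactly as before; only the execution model changes.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Stream;

public class VirtualThreadReadDemo {
    public static void main(String[] args) throws Exception {
        // Create a few sample files to read concurrently.
        Path dir = Files.createTempDirectory("vt-demo-");
        for (int i = 0; i < 3; i++) {
            Files.writeString(dir.resolve("part-" + i + ".txt"), "chunk " + i + "\n");
        }

        // JDK 21+: one cheap virtual thread per blocking read; closing the
        // executor waits for all submitted tasks to finish.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
             Stream<Path> paths = Files.list(dir)) {
            List<Future<String>> results = paths
                    .map(p -> executor.submit(() -> Files.readString(p)))
                    .toList();
            for (Future<String> result : results) {
                System.out.print(result.get());
            }
        }
    }
}
```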
Key Takeaways and Next Steps
When I/O code goes wrong, it rarely fails loudly. It slows down, corrupts a file once a week, or breaks only in one environment. The fix is usually boring—and that’s a compliment. Boring I/O is stable I/O.
Here’s what I’d do next if you want your Java I/O to feel solid:
- Standardize on UTF-8 for text at boundaries, and make it explicit in constructors and Files.* calls.
- Make try-with-resources your default muscle memory; treat “forgot to close” as a bug, not a style issue.
- Use buffering unless you have a clear reason not to. Your future self will thank you when a job that used to take minutes drops to seconds.
- For important outputs, write to a temp file and move into place. That one pattern prevents a lot of data corruption incidents.
- Choose the simplest API that matches the shape of your data: Files.readString for small text, streaming readers for large text, byte streams for binary.
From here, tailor these patterns to your exact scenario (CLI tool, backend service, batch job, or file-processing pipeline), and consider turning them into a small “I/O checklist” your team applies during code reviews.


