Pretty Print JSON in Python (Practical Guide for 2026)

I still remember the first time I logged a JSON payload from a production service and stared at a single line of text that stretched past my terminal’s width. It technically worked, but it was useless for debugging. If you’ve ever tried to review API responses, config files, or event payloads without formatting, you’ve felt the same friction. That’s why pretty printing matters: it turns dense data into readable structure so you can reason about it quickly and avoid mistakes.

In this post, I’ll show you how I pretty print JSON in Python for day‑to‑day tasks: debugging, logging, diffs, and data pipelines. I’ll cover the built‑in json module, file workflows, formatting options, and modern patterns I use in 2026‑style Python projects. I’ll also point out common mistakes, edge cases, and performance considerations so you can make a clear decision about when to format and when to keep data compact. If you want practical, runnable examples and a few real‑world patterns you can copy into your codebase, you’re in the right place.

Why pretty printing is more than aesthetics

When JSON is compact, it’s great for transport and storage. When you’re trying to understand a problem, it’s a nightmare. Pretty printing is a developer‑experience feature: it reduces cognitive load and speeds up reviews, whether you’re debugging a request payload or inspecting a cached response.

In my experience, there are three moments where formatting saves time:

1) Debugging in logs: A single‑line JSON string makes it easy to miss keys or mismatch nesting. Proper indentation reveals structure and highlights missing fields quickly.

2) Code reviews and diffs: When JSON is pretty printed, diffs show actual changes instead of a big line change. This is especially important for configuration files and test fixtures.

3) Human‑in‑the‑loop data work: When I’m validating data samples for ETL, I want to scan 10–20 records quickly. Readable formatting makes that doable without extra tooling.

Think of it like formatting code: the machine doesn’t care, but the human needs clarity.

The core approach: json.dumps with indentation

Python’s built‑in json module gives you everything you need. The json.dumps function converts a Python object to a JSON string, and the indent parameter controls pretty printing.

Here’s a minimal, runnable example using data that looks like typical HR records:

import json

employees = [
    {"employee_id": 1, "name": "Abhishek", "title": "Software Engineer"},
    {"employee_id": 2, "name": "Garima", "title": "Email Marketing Specialist"},
]

# Compact JSON (single line)
compact = json.dumps(employees)
print(compact)

# Pretty JSON (2 spaces per level)
pretty = json.dumps(employees, indent=2)
print(pretty)

I usually start with indent=2 because it’s readable and consistent with most modern style guides. If you’re formatting for a terminal display and you want extra breathing room, indent=4 is common in Python‑first teams.

A quick note on indentation levels

Indentation isn’t a correctness feature; it’s a readability choice. The value you choose should be consistent across your project. If you’re writing JSON files that will be reviewed or committed, pick a standard (2 or 4 spaces) and stick to it. I often standardize this with a small helper function (see later sections).

Pretty printing from a file

Pretty printing isn’t only for in‑memory objects. You often need to load a JSON file, transform it, and print it in a readable format. The key functions are:

  • json.load() to parse JSON from a file object
  • json.dump() to write JSON to a file object

Here’s a full example that reads a file, pretty prints it to stdout, and then writes a formatted copy to a new file:

import json
from pathlib import Path

input_path = Path("employees.json")
output_path = Path("employees.pretty.json")

with input_path.open("r", encoding="utf-8") as f:
    data = json.load(f)

# Pretty print to console
print(json.dumps(data, indent=2))

# Write prettified JSON to a new file
with output_path.open("w", encoding="utf-8") as f:
    json.dump(data, f, indent=2)

I recommend pathlib for modern Python code; it’s concise and works across platforms. Always specify encoding="utf-8" so you don’t get surprised by default system encodings.

Controlling formatting: indent, sort_keys, separators, ensure_ascii

Pretty printing in real projects often needs a bit more than indent. Here are the formatting options I use most:

sort_keys for stable diffs

If you want consistent ordering for objects (helpful in code reviews and test fixtures), use sort_keys=True.

import json

config = {"z": 3, "a": 1, "m": 2}
print(json.dumps(config, indent=2, sort_keys=True))

This yields a stable, alphabetical key order, which makes diffs clean and makes it easier to scan for a specific field.

separators for compact vs pretty output

The separators parameter lets you control punctuation spacing. I use this when I want compact JSON for transport without altering data.

import json

payload = {"event": "signup", "source": "mobile", "ts": "2026-01-23T10:15:00Z"}

compact = json.dumps(payload, separators=(",", ":"))
print(compact)

This strips spaces after commas and colons. It’s a good choice for logs or network payloads. For pretty output, don’t override separators; let indent handle spacing.

ensure_ascii for real‑world text

By default, Python escapes non‑ASCII characters. If you’re working with names or localized data, this makes output ugly. Set ensure_ascii=False to preserve Unicode characters.

import json

profile = {"name": "José", "city": "München"}
print(json.dumps(profile, indent=2, ensure_ascii=False))

This prints readable characters. If you later write this to disk, keep encoding="utf-8".

A reusable pretty print helper (cleaner code, fewer mistakes)

I rarely call json.dumps directly in multiple places. Instead, I create a helper so formatting is consistent and centralized.

import json
from typing import Any

def pretty_json(data: Any) -> str:
    """Return JSON formatted with consistent, readable settings."""
    return json.dumps(
        data,
        indent=2,
        sort_keys=True,
        ensure_ascii=False,
    )

payload = {"z": 3, "a": 1, "name": "Zoë"}
print(pretty_json(payload))

Why this matters: when formatting is standardized, your logs, debug prints, and file outputs all look the same. That consistency reduces friction when you jump between projects or tools.

When pretty printing is the wrong choice

Pretty printing is helpful, but it has costs. I avoid it in these situations:

1) High‑throughput logging: If you’re logging thousands of events per second, pretty output inflates size and reduces log throughput. I usually keep logs compact and pretty print only in debug modes.

2) Network payloads: Pretty JSON is larger, which means more bandwidth and slower responses. In APIs, I send compact JSON and only format for debugging or local testing.

3) Latency‑sensitive paths: If you’re formatting JSON inside a tight loop, the extra string building can add latency. The cost is usually small, but it can matter at scale.

My rule: if humans need to read it immediately, pretty print. If machines are the primary consumers, keep it compact.

Common mistakes I see (and how to avoid them)

Here are the pitfalls I encounter most when mentoring developers or reviewing code:

Mistake 1: Pretty printing invalid JSON strings

If your JSON string is invalid, json.loads will throw an exception. Always validate or wrap it in a try/except before pretty printing.

import json

raw = '{"name": "Lisa", "age": 34,}'  # trailing comma is invalid

try:
    data = json.loads(raw)
    print(json.dumps(data, indent=2))
except json.JSONDecodeError as e:
    print(f"Invalid JSON: {e}")

Mistake 2: Confusing json.dumps and json.dump

  • json.dumps returns a string
  • json.dump writes to a file

I still see json.dump(data) without a file handle, which raises a TypeError. Keep the usage clear.
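A minimal illustration of the difference, using an in-memory buffer in place of a real file:

```python
import io
import json

data = {"name": "Lisa"}

# json.dumps returns a string
text = json.dumps(data, indent=2)
assert isinstance(text, str)

# json.dump needs a file-like object as its second argument
buffer = io.StringIO()
json.dump(data, buffer, indent=2)
assert buffer.getvalue() == text
```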

Mistake 3: Printing Python dicts instead of JSON

A Python dict prints with single quotes and isn’t valid JSON. If someone copies it into a JSON parser, it fails. Always use json.dumps if you need JSON output.
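To see the difference at a glance, compare the Python repr with real JSON output:

```python
import json

record = {"active": True, "middle_name": None}

# Python's repr is not valid JSON: single quotes, True, None
print(repr(record))        # {'active': True, 'middle_name': None}

# json.dumps produces valid JSON: double quotes, true, null
print(json.dumps(record))  # {"active": true, "middle_name": null}
```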

Mistake 4: Forgetting encoding

If you pretty print and write to disk without specifying encoding, non‑ASCII characters can get corrupted on some systems. Use encoding="utf-8" every time.

Real‑world scenarios and edge cases

Pretty printing seems simple until you handle live data. Here are patterns I rely on.

Handling large JSON objects safely

If you’re working with big payloads, formatting can balloon memory usage. I avoid printing full payloads in logs. Instead, I trim or sample.

import json

def pretty_preview(data, max_items=3):
    """Pretty print only the first N items of a list payload."""
    if isinstance(data, list):
        preview = data[:max_items]
    else:
        preview = data
    return json.dumps(preview, indent=2, ensure_ascii=False)

This gives me a readable preview without dumping thousands of elements. For deeply nested structures, I sometimes extract a specific path (like data[0]["payload"]).

Logging JSON without breaking structured log collectors

If you’re using a log collector that expects JSON on each line, pretty printing can break ingestion. In those cases, I log compact JSON and pretty print only when I’m running locally or in an interactive debug session.

Sorting keys for deterministic snapshots

When writing JSON snapshots for tests, I always set sort_keys=True. This ensures the file doesn’t change order between runs, which avoids noisy diffs.

Preserving precision for decimals

The json module converts Python floats to JSON numbers. If you’re dealing with currency or high‑precision data, consider converting decimals to strings before dumping. That isn’t a pretty printing issue, but it’s a real‑world pitfall.
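One way to sketch that conversion, using the standard parse_float and default hooks of the json module (the "amount" field here is a made-up example):

```python
import json
from decimal import Decimal

raw = '{"amount": 19.99}'

# Parse numbers as Decimal instead of binary floats
data = json.loads(raw, parse_float=Decimal)
assert data["amount"] == Decimal("19.99")

# Dump Decimals as strings so the precision survives serialization
print(json.dumps(data, indent=2, default=str))
```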

A practical workflow: API response inspection

Here’s a real pattern I use when debugging API responses in a CLI tool or a script.

import json
import urllib.request

url = "https://api.example.com/users/123"

with urllib.request.urlopen(url) as response:
    raw = response.read().decode("utf-8")

# Parse and pretty print
try:
    data = json.loads(raw)
    print(json.dumps(data, indent=2, ensure_ascii=False, sort_keys=True))
except json.JSONDecodeError:
    print("Response was not valid JSON")

I recommend this pattern because it keeps parsing and formatting in one place and gives you readable output in seconds. If you’re using requests, the same idea applies: json.loads(response.text) or response.json() if the server is well‑behaved.

Traditional vs modern approaches (and what I recommend)

Here’s a quick comparison of common approaches I see in teams today.

Approach: Pretty printing
  • Traditional: print(json.dumps(data, indent=4)) scattered everywhere
  • Modern (2026‑style): a single helper function with consistent settings
  • Recommendation: use a helper for consistent formatting

Approach: File formatting
  • Traditional: manual open/close, no encoding specified
  • Modern (2026‑style): pathlib + encoding="utf-8"
  • Recommendation: use pathlib and explicit encoding

Approach: Logging
  • Traditional: pretty print in prod logs
  • Modern (2026‑style): compact JSON with an optional debug flag
  • Recommendation: keep prod logs compact, pretty print only in debug

Approach: Diffs
  • Traditional: unsorted keys
  • Modern (2026‑style): sort_keys=True in snapshots
  • Recommendation: always sort keys for deterministic outputs

I recommend the modern approach because it makes formatting a conscious, repeatable choice rather than a one‑off hack.

Performance considerations (realistic ranges)

Pretty printing does extra work: it adds whitespace and builds larger strings. In practice, for small payloads (a few KB), I usually see formatting take around 0.1–2 ms per object on modern laptops. For large payloads (hundreds of KB or MB), that can climb to 10–50 ms or more, and memory usage grows with the expanded output.

If performance is a concern:

  • Keep output compact in hot paths.
  • Pretty print only for targeted debug traces.
  • Use sampling or previews for large lists.

When you’re in a CI pipeline or local tooling context, the overhead is usually acceptable and the readability benefit is worth it.

A compact CLI‑style pretty printer script

If you want a quick utility script to pretty print JSON from stdin or a file, this pattern is clean and easy to maintain.

import json
import sys
from pathlib import Path

def pretty_print_json(text: str) -> str:
    data = json.loads(text)
    return json.dumps(data, indent=2, ensure_ascii=False, sort_keys=True)

def main():
    if len(sys.argv) > 1:
        path = Path(sys.argv[1])
        text = path.read_text(encoding="utf-8")
    else:
        text = sys.stdin.read()
    try:
        print(pretty_print_json(text))
    except json.JSONDecodeError as e:
        sys.exit(f"Invalid JSON: {e}")

if __name__ == "__main__":
    main()

This lets you run:

  • python pretty_json.py payload.json
  • cat payload.json | python pretty_json.py

It’s simple, reliable, and a great tool to keep around.

When to choose YAML or other formats instead

Some teams reach for YAML to get readable configs. That can be fine, but it introduces its own pitfalls (indentation sensitivity, implicit typing). If your system expects JSON or you want stronger guarantees around parsing, pretty JSON is still a solid choice. I usually keep JSON for machine‑readable configs and reserve YAML for human‑maintained documents that don’t need strict validation.

If you’re already on JSON and just need readability, pretty printing gives you most of the usability without changing formats.

Practical checklist for production‑quality pretty printing

Here’s the short checklist I use when I want pretty output that won’t cause trouble later:

  • Use indent=2 (or 4) consistently
  • Add sort_keys=True for stable diffs
  • Set ensure_ascii=False for real‑world text
  • Use encoding="utf-8" for file IO
  • Avoid pretty output in hot paths or production logs
  • Wrap parsing in a try/except to handle invalid JSON safely

If you follow these six points, your JSON output will be readable, stable, and reliable.

A final example: formatting config files safely

Here’s a more complete example that validates and formats a config file in place. I use this pattern when I want to normalize JSON configs before checking them into version control.

import json
from pathlib import Path

config_path = Path("service.config.json")
text = config_path.read_text(encoding="utf-8")

try:
    config = json.loads(text)
except json.JSONDecodeError as e:
    raise SystemExit(f"Invalid config JSON: {e}")

# Write back formatted config
config_path.write_text(
    json.dumps(config, indent=2, sort_keys=True, ensure_ascii=False),
    encoding="utf-8",
)

That last line is intentionally explicit: I want both the indentation and the encoding to be obvious when I read the file later.

Pretty printing nested objects without losing context

Deeply nested JSON is where readability breaks down fastest. If your object has 6–10 levels of nesting, you can still pretty print it, but you may not want to print the entire tree every time. What I do instead is pretty print selective subtrees.

Targeted pretty printing by path

If I want to inspect payload[0].metadata from a large API response, I’ll extract just that part before formatting it.

import json

def pretty_path(data, *keys):
    """Extract a nested path from dicts/lists and pretty print it."""
    current = data
    for key in keys:
        current = current[key]
    return json.dumps(current, indent=2, ensure_ascii=False, sort_keys=True)

response = {
    "status": "ok",
    "payload": [
        {"id": 1, "metadata": {"source": "web", "flags": ["a", "b"]}},
        {"id": 2, "metadata": {"source": "mobile", "flags": ["c"]}},
    ],
}

print(pretty_path(response, "payload", 0, "metadata"))

This gives me clarity without the noise of unrelated keys. It’s also safer for logs because you’re reducing the chance of accidentally dumping sensitive fields.

Safe path extraction with defaults

In production data, paths aren’t always guaranteed. A safe approach is to wrap path traversal in a try/except or use a helper that returns None if something is missing.

import json

def safe_pretty_path(data, path):
    """Safely extract a nested path; return a readable message if missing."""
    current = data
    for key in path:
        try:
            current = current[key]
        except (KeyError, IndexError, TypeError):
            return f"<missing path: {path!r}>"
    return json.dumps(current, indent=2, ensure_ascii=False, sort_keys=True)

This is the difference between a quick debug session and an hour of chasing a KeyError.

Pretty printing and custom types (dataclasses, enums, decimals)

The json module only handles basic types by default: dict, list, str, int, float, bool, and None. Modern Python code uses dataclasses, enums, and decimals all the time, which leads to TypeError: Object of type X is not JSON serializable.

A clean default function for custom types

You can give json.dumps a default callback that converts unsupported objects into serializable forms.

import json
from dataclasses import dataclass
from decimal import Decimal
from enum import Enum

class Status(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"

@dataclass
class User:
    id: int
    name: str
    balance: Decimal
    status: Status

def default_serializer(obj):
    if isinstance(obj, Decimal):
        return str(obj)  # preserve precision
    if isinstance(obj, Enum):
        return obj.value
    if hasattr(obj, "__dict__"):
        return obj.__dict__
    raise TypeError(f"Type {type(obj).__name__} is not JSON serializable")

user = User(id=1, name="Sam", balance=Decimal("19.99"), status=Status.ACTIVE)
print(json.dumps(user, indent=2, default=default_serializer, ensure_ascii=False))

This approach keeps your pretty printing robust even as your data structures evolve. It also lets you keep precision for money and keep enums human‑readable.

Dataclasses shortcut with asdict

When you know you’re dealing with dataclasses, you can use dataclasses.asdict() before pretty printing:

from dataclasses import asdict

print(json.dumps(asdict(user), indent=2, ensure_ascii=False))

It’s simple and clear, but it won’t handle Decimal or Enum without extra conversion.

Using json.loads vs json.load for validation

Pretty printing often starts with raw JSON text from a file, API, or message queue. There’s a subtle but useful distinction:

  • json.loads(text) expects a string and lets you validate or transform before writing.
  • json.load(file) reads directly from a file object.

When I’m validating user input or a response body, I always go through json.loads so I can log or inspect the raw text in case of errors. When I’m reading a known file, json.load is simpler and just as safe.

A tiny pattern I use in CLI tools:

import json

def parse_json(text: str):
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        # This is where I log or surface the raw string if needed
        raise ValueError(f"Invalid JSON: {e}")

This makes parsing explicit and keeps your error handling centralized.

Pretty printing in tests (snapshots and fixtures)

Pretty JSON is great for tests because it makes fixtures readable and diffs meaningful. The trap is non‑deterministic ordering. Always sort keys in test fixtures and snapshots.

A deterministic snapshot writer

Here’s a helper I use in unit tests:

import json
from pathlib import Path

def write_snapshot(path: Path, data):
    path.write_text(
        json.dumps(data, indent=2, sort_keys=True, ensure_ascii=False),
        encoding="utf-8",
    )

That one helper makes fixture updates consistent across machines and Python versions. It also ensures that if a dictionary order changes, you don’t get a noisy diff.

When not to pretty print in tests

If your test intentionally compares raw API responses or compressed payloads, pretty printing can conceal issues. In those cases, keep raw strings and compare exactly. I only pretty print when the goal is human readability.

Pretty printing for logs without bloating production

Logs are where JSON formatting gets complicated. You want readability for debugging, but you also want efficiency in production.

A simple pattern: compact in prod, pretty in dev

I keep a DEBUG flag and format accordingly:

import json

DEBUG = True

payload = {"event": "signup", "user": {"id": 123, "plan": "pro"}}

if DEBUG:
    print(json.dumps(payload, indent=2, ensure_ascii=False, sort_keys=True))
else:
    print(json.dumps(payload, separators=(",", ":")))

This pattern keeps your production logs lean but gives you readability when you need it.

Log collectors and one‑line JSON

If your log collector expects one JSON object per line, pretty printing may break ingestion because it adds line breaks. In that case, keep logs compact and do the pretty formatting only when you view the logs locally or in a debug dashboard.

Streaming JSON and partial pretty printing

Sometimes you’re not holding the whole JSON in memory (large files, streaming APIs, or logs). The standard json module doesn’t stream pretty printing, but you can still handle large files by loading them in chunks or by pretty printing only a prefix.

A practical approach: preview and truncate

I often use a preview function when inspecting massive payloads:

import json

def pretty_truncate(data, max_chars=2000):
    text = json.dumps(data, indent=2, ensure_ascii=False, sort_keys=True)
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n... (truncated)"

This keeps your terminal or log output manageable without losing the key structure.

A note on JSON Lines (NDJSON)

If your data is one JSON object per line (common for logs), pretty printing isn’t a good fit because it breaks the line structure. In those cases, parse each line and pretty print it individually when you inspect it locally, but keep the stored data compact.
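A small sketch of that inspection step, using a made-up two-line NDJSON sample: keep the stored data one object per line, and pretty print each object only when you look at it.

```python
import json

# Hypothetical NDJSON sample: one compact JSON object per line
ndjson = '{"event":"login","user":1}\n{"event":"logout","user":1}\n'

for line in ndjson.splitlines():
    # Parse each line separately, then format it for human inspection
    print(json.dumps(json.loads(line), indent=2))
```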

Differences between print, pprint, and json.dumps

Python’s pprint module can make dicts look nice, but it’s not JSON. For real JSON output, you need json.dumps.

  • print(obj) uses Python’s repr and single quotes.
  • pprint(obj) makes it readable but still not JSON.
  • json.dumps(obj) produces valid JSON and is safe to copy into tools.

If your goal is to share output with an API, a config file, or a teammate who will paste it into a JSON parser, always use json.dumps.
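The three bullets above, side by side on a small dict:

```python
import json
from pprint import pformat

data = {"name": "Ava", "active": True}

print(data)             # Python repr: single quotes and True
print(pformat(data))    # readable, but still Python syntax (pprint also sorts keys)
print(json.dumps(data)) # valid JSON: double quotes and true
```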

Pretty printing while preserving order

In some cases, key order carries meaning, even if JSON as a standard doesn’t require it. Python 3.7+ preserves insertion order for dicts, so if you load JSON and dump it back out without sorting, the order is preserved.

When to preserve order

  • When the data has a human‑curated order
  • When you want diffs that follow the original file structure
  • When you’re pretty printing for a user‑facing config file

When to sort keys instead

  • When order isn’t meaningful and you want stability
  • When you’re generating snapshots and want deterministic output

Choose one and be consistent. If you’re in a mixed environment, consider allowing a toggle like sort_keys in your helper.

A flexible pretty printer helper for teams

A helper can do more than format; it can encode your team’s preferences. Here’s a slightly more advanced helper I’ve used in multiple projects:

import json
from typing import Any

def pretty_json(data: Any, *, indent=2, sort_keys=True, ensure_ascii=False) -> str:
    """Pretty print JSON with sensible defaults and optional overrides."""
    return json.dumps(
        data,
        indent=indent,
        sort_keys=sort_keys,
        ensure_ascii=ensure_ascii,
    )

This lets you choose standard defaults while still being flexible. It also makes tests easy: call pretty_json(data) and compare against a snapshot.

Error handling patterns that scale

Pretty printing is only one part of a robust JSON workflow. If you’re building tools that process user input or external data, you need to handle errors gracefully.

Example: Safe pretty print with explicit errors

import json

def safe_pretty_json(text: str) -> str:
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError(f"Invalid JSON input: {e}")
    return json.dumps(data, indent=2, ensure_ascii=False, sort_keys=True)

This wraps the raw exception in a higher‑level error that’s easier to surface in a CLI tool or web app.

Tip: Include context in errors

If you’re running this in production, log the location or source of the bad JSON, not the content itself (to avoid leaking sensitive data). A simple message like “Invalid JSON in payload from service X” is usually enough.

Pretty printing in data pipelines

Data pipelines often do transformations between JSON and other formats. Pretty printing is usually not used in the pipeline itself, but it’s extremely helpful for spot checks.

Spot‑check example

import json

def inspect_record(record):
    # record is a Python dict from your pipeline
    print(json.dumps(record, indent=2, ensure_ascii=False, sort_keys=True))

Use this when you’re sampling records or investigating anomalies. I usually keep it behind a debug flag to avoid huge logs.

Where I avoid pretty printing in pipelines

  • When writing to storage (S3, data lake, warehouse)
  • When streaming data at high throughput
  • When emitting JSON lines for processing

In those cases, I keep data compact and readable through tooling (not formatting).

Pretty printing configuration files with validation

I often need to normalize config files while validating them. It’s a small but useful workflow that prevents messy commits.

Combined validate + format step

import json
from pathlib import Path

def normalize_config(path: Path):
    text = path.read_text(encoding="utf-8")
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        raise SystemExit(f"Invalid JSON in {path}: {e}")
    path.write_text(
        json.dumps(data, indent=2, sort_keys=True, ensure_ascii=False),
        encoding="utf-8",
    )

normalize_config(Path("service.config.json"))

This is a good pre‑commit step or a script you can run before reviewing a PR.

Practical diffs and JSON formatting in version control

The reason pretty printing is so valuable for version control is that diffs become meaningful. With compact JSON, a small change can look like the entire file changed.

A diff‑friendly workflow

  • Normalize JSON configs with indent=2 and sort_keys=True
  • Commit formatted files to keep diffs readable
  • Use a lint or format check if your repo has multiple authors

If you’re working with generated JSON (like API snapshots), add a dedicated formatting step to your script before committing the file. It’s a small step that improves collaboration.

Troubleshooting common JSON decoding errors

If you see JSONDecodeError, there are some repeat offenders:

1) Trailing commas: Not allowed in JSON.

2) Single quotes: JSON requires double quotes.

3) Comments: JSON doesn’t allow comments (use JSON5 or strip comments first).

4) NaN or Infinity: Not valid JSON; some libraries emit them, but the standard doesn’t allow it.

A quick validator helper

import json

def validate_json(text: str) -> bool:
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

Use this to quickly check whether your input is valid before attempting a pretty print.

Pretty printing with consistent spacing and style

Indentation is the big lever, but whitespace style also matters. For example, some teams prefer a trailing space after colons; others don’t care. Python’s default pretty printing does the right thing for most teams, but if you need exact control, you can define separators.

Example: Extra‑compact pretty printing

You can combine indent with custom separators to reduce whitespace while keeping structure:

import json

# separators=(",", ":") removes the space after each colon while keeping indentation
print(json.dumps({"a": 1, "b": {"c": 2}}, indent=2, separators=(",", ":")))

This produces a clean format with less extra spacing than default.

Pretty printing with stable ordering for nested objects

sort_keys=True sorts dictionary keys at every level. If you want deterministic nested structures, this is your best option. It can make your JSON look less “natural,” but it’s great for snapshots and data exports.

If you prefer original insertion order, avoid sorting but ensure your data is built consistently (e.g., build dicts in the same order every run).

A real‑world logging utility (copy‑paste friendly)

When I build CLI tools, I keep a JSON logger that can switch between compact and pretty depending on a flag.

import json

def log_json(data, pretty=False):
    if pretty:
        print(json.dumps(data, indent=2, ensure_ascii=False, sort_keys=True))
    else:
        print(json.dumps(data, separators=(",", ":")))

It’s a tiny function, but it keeps your scripts clean and your logs consistent.

Avoiding accidental data leaks in pretty output

Pretty printing can make it easier to see sensitive data, which means it can also make it easier to leak sensitive data. If you’re logging production payloads, always consider redaction.

A redaction pattern

import json

SENSITIVE_KEYS = {"password", "token", "api_key", "secret"}

def redact(data):
    if isinstance(data, dict):
        return {
            k: ("<redacted>" if k in SENSITIVE_KEYS else redact(v))
            for k, v in data.items()
        }
    if isinstance(data, list):
        return [redact(v) for v in data]
    return data

payload = {"user": "ana", "token": "abc123", "profile": {"api_key": "xyz"}}
print(json.dumps(redact(payload), indent=2, ensure_ascii=False))

This is a practical pattern if you’re printing data that might include secrets. I’ve used this in debugging scripts and it has saved me from accidental leaks more than once.

Pretty printing vs validation vs schema enforcement

Pretty printing doesn’t validate your JSON structure beyond syntax. If you need to ensure a certain schema, you should validate separately using a schema checker. I mention this because I’ve seen people rely on pretty printing as a proxy for “correctness.” It isn’t.

My mental model:

  • Pretty printing = readability
  • Parsing = syntactic validity
  • Schema validation = structural correctness

Keep those three separate and you’ll avoid a lot of confusion.

How I teach juniors to think about JSON formatting

When I mentor new developers, I give them a simple rule:

  • If the JSON is for a machine, keep it compact.
  • If the JSON is for a human, pretty print it.
  • If it’s for both, store compact and add a pretty view in tooling.

That rule helps them make pragmatic decisions instead of copying whatever they saw in a blog post.

A minimal “JSON pretty printer” library pattern

If you want to reuse pretty printing across multiple projects, it can be useful to centralize it in a tiny module.

# json_utils.py
import json
from typing import Any

DEFAULT_INDENT = 2

def to_pretty_json(data: Any, *, indent=DEFAULT_INDENT) -> str:
    return json.dumps(data, indent=indent, sort_keys=True, ensure_ascii=False)

Now you can import to_pretty_json anywhere and keep formatting consistent.

Troubleshooting “Object of type X is not JSON serializable”

This error pops up when you pass custom types. Use one of these fixes:

1) Convert custom types to plain dicts before dumping

2) Use default in json.dumps

3) Normalize your objects in a separate function

Here’s a practical normalization approach:

from dataclasses import asdict, is_dataclass
from decimal import Decimal
from enum import Enum

def normalize(obj):
    if is_dataclass(obj):
        return {k: normalize(v) for k, v in asdict(obj).items()}
    if isinstance(obj, Enum):
        return obj.value
    if isinstance(obj, Decimal):
        return str(obj)
    if isinstance(obj, dict):
        return {k: normalize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [normalize(v) for v in obj]
    return obj

Then you can do json.dumps(normalize(obj), indent=2) and stay safe.

Pretty printing in interactive Python (REPL and notebooks)

If you’re working in a REPL or notebook, you can wrap a pretty print in a helper and keep your output clean. In notebooks, you might even prefer a JSON viewer, but a simple json.dumps with indent goes a long way.

import json

def show(data):
    print(json.dumps(data, indent=2, ensure_ascii=False, sort_keys=True))

This is a tiny convenience that saves a lot of time when exploring APIs.

Comparing JSON output with and without pretty print

Sometimes the best way to understand the value is to compare the two outputs side by side.

Compact:

{"user":{"id":123,"name":"Ava","prefs":{"theme":"dark","alerts":true}},"roles":["admin","editor"]}

Pretty:

{
  "roles": [
    "admin",
    "editor"
  ],
  "user": {
    "id": 123,
    "name": "Ava",
    "prefs": {
      "alerts": true,
      "theme": "dark"
    }
  }
}

Even without any explanation, the pretty output is easier to scan. That’s the whole point.

A complete end‑to‑end workflow example

Let’s put everything together in a realistic script: fetch JSON from an API, redact sensitive fields, and pretty print with a consistent style.

import json
import urllib.request

SENSITIVE_KEYS = {"token", "password", "secret"}

def redact(data):
    if isinstance(data, dict):
        return {
            k: ("<redacted>" if k in SENSITIVE_KEYS else redact(v))
            for k, v in data.items()
        }
    if isinstance(data, list):
        return [redact(v) for v in data]
    return data

def fetch_json(url: str):
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    return json.loads(text)

def pretty(data):
    return json.dumps(data, indent=2, sort_keys=True, ensure_ascii=False)

url = "https://api.example.com/account/42"

try:
    data = fetch_json(url)
    print(pretty(redact(data)))
except json.JSONDecodeError:
    print("Response was not valid JSON")

This small script demonstrates a realistic debugging flow and shows how pretty printing fits into a practical workflow.

Quick decision guide (when to pretty print)

When I’m not sure whether to pretty print, I run through this mini checklist:

  • Is a human going to read this? → Yes = pretty print.
  • Is this going to production logs or a payload? → No = keep compact.
  • Is this for a test fixture or config file? → Yes = pretty print with sorted keys.
  • Is the payload huge? → Maybe pretty print a preview only.

This keeps decisions consistent and avoids the “we always pretty print everything” trap.

Frequently asked questions

Is pretty printing required for valid JSON?

No. JSON is valid with or without whitespace and newlines. Pretty printing is purely for human readability.

Does pretty printing change the data?

No. It only adds whitespace. The data values and structure remain the same.

Is json.dumps the same as json.dump?

No. json.dumps returns a string. json.dump writes to a file handle.

What’s the best indentation value?

Use 2 or 4. Choose one and stay consistent in your project.

Can I pretty print JSON in place?

Yes. Read the file, parse it, and write it back with pretty formatting. Just make sure you have a backup if the file is critical.
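A cautious sketch of that in-place step; the function name and the ".bak" suffix are just conventions I'm assuming here, not a standard API:

```python
import json
from pathlib import Path

def pretty_print_in_place(path: Path) -> None:
    """Reformat a JSON file in place, keeping a .bak copy of the original."""
    original = path.read_text(encoding="utf-8")
    data = json.loads(original)  # fails here, before anything is overwritten
    # Back up the original before touching the file
    path.with_suffix(path.suffix + ".bak").write_text(original, encoding="utf-8")
    path.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")
```

Parsing before writing means an invalid file raises an error and is left untouched.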

Closing thoughts

Pretty printing JSON isn’t complicated, but it’s surprisingly impactful. It turns impossible‑to‑read data into structured information you can reason about quickly. It makes debugging faster, reviews clearer, and data workflows smoother. The best part is that Python gives you everything you need out of the box.

If you take away one idea, make it this: choose a formatting standard and make it consistent. Whether you’re writing logs, storing configs, or inspecting API responses, consistent pretty printing is one of those small quality‑of‑life improvements that pays off every day.

If you want a single practical takeaway to implement today, create a pretty_json helper with indent=2, sort_keys=True, and ensure_ascii=False. Then use it everywhere humans need to read JSON. That one tiny function will make your codebase calmer and your debugging sessions shorter.
