declare Command in Linux (Bash) with Examples

The first time declare really saved me, I was debugging a “works on my machine” shell script that behaved differently in CI. The variable looked correct when echoed, but it had been exported in one place, treated as an integer in another, and accidentally shadowed inside a function. echo didn’t tell me any of that—declare -p did.

If you write Bash beyond one-liners—deployment scripts, build steps, dev tooling, container entrypoints—declare is one of the few features that gives you real control over variables and functions: you can set attributes (readonly, exported, integer, array/associative array, name references), inspect what’s actually in scope, and make your scripts harder to misuse.

I’m going to treat declare like a practical toolbelt: I’ll show you the syntax, the options I reach for most, how scope changes inside functions, and patterns I use in 2026-era workflows (CI runners, containers, task automation). Every example is runnable in Bash, and I’ll call out the footguns so you don’t learn them the hard way.

Why I reach for declare

Bash variables are deceptively simple. You assign a value and you’re done—until you’re not. The moment you care about any of the following, declare becomes worth using:

  • You want to enforce intent: constants should be readonly, numeric counters should behave like integers, configuration should be exported to subprocesses.
  • You want to inspect reality: not just a value, but whether it’s exported, readonly, an array, an associative array, or a nameref.
  • You want predictable scope: variables inside functions can shadow globals; declare -g lets you update a global intentionally.
  • You want safer APIs between functions: namerefs (declare -n) let you write “pass-by-reference” helpers without fragile eval.

One more practical note: declare is a Bash built-in. That means it’s fast (no process spawn) and always available anywhere Bash is available.

Syntax and a useful mental model

The syntax is:

declare [options] [name[=value]] [name[=value]] …

I think of declare as two commands in one:

1) A setter for attributes (and optionally a value)

declare -r app_name='release-orchestrator'

2) An inspector for existing variables and functions

declare -p app_name

If you run declare with no names, it prints a lot of definitions from the current shell environment. That can be noisy, but it’s handy when you’re exploring an unknown shell session.

declare vs plain assignment

Plain assignment is fine for most values:

build_id='2026.02.08.1'

But assignment alone cannot express “this should be readonly” or “this is an integer” or “this is an associative array”. declare can.

declare and typeset

In Bash, typeset is a synonym for declare. In mixed environments (people coming from ksh/zsh), you’ll see both. I prefer declare in Bash scripts because it reads like what it does.
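A quick way to see the equivalence: set an attribute with one builtin and inspect with the other (the variable name is just for illustration).

```shell
# typeset and declare are interchangeable in Bash
typeset -i counter=1

# declare -p reports the attribute regardless of which builtin set it
repr=$(declare -p counter)
echo "$repr"   # declare -i counter="1"
```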

Debugging and introspection with declare -p

If you only memorize one option, make it -p.

-p: print a variable with attributes

declare -p name prints a reusable declaration that includes attributes.

Try this in a Bash shell:

export deploy_env='staging'

readonly pipeline_id='ci-74219'

declare -i retry_budget=3

declare -p deploy_env pipeline_id retry_budget

You’ll see output shaped like:

declare -x deploy_env="staging"

declare -r pipeline_id="ci-74219"

declare -i retry_budget="3"

That’s already more information than echo can give you:

  • -x means exported
  • -r means readonly
  • -i means integer

Inspecting arrays and associative arrays

declare -p is also the cleanest way to confirm array types.

Indexed array:

declare -a service_ports

service_ports=(443 8080 9090)

declare -p service_ports

Associative array (Bash 4+):

declare -A http_headers

http_headers=(

['Accept']='application/json'

['User-Agent']='release-orchestrator/1.4'

)

declare -p http_headers

Even if you don’t love reading Bash’s array syntax, this is a reliable way to confirm what you actually have.

Using declare -p inside functions

A lot of scope bugs become obvious if you inspect inside and outside a function:

region='us-east-1'

set_region_locally() {

declare region='eu-west-1'

declare -p region

}

set_region_locally

declare -p region

You’ll observe two different region variables: one local to the function, one global. That’s intentional behavior, but it surprises people.

Variable attributes I use most (with real examples)

Here are the options I see in production shell scripts most often: integers (-i), readonly (-r), export (-x), arrays (-a), associative arrays (-A), case mapping (-l/-u), and namerefs (-n).

A quick map: “traditional” Bash vs attribute-driven Bash

  • Treat a counter as a number. Traditional: count=$((count + 1)). With declare: declare -i count=0; count+=1
  • Constant value. Traditional: "just don't change it". With declare: declare -r api_base_url='https://api.internal'
  • Make an env var visible to subprocesses. Traditional: export name=value. With declare: declare -x name='value'
  • Normalize case. Traditional: external tools like tr. With declare: declare -l lower; lower='MiXeD'
  • Map keys to values. Traditional: awkward parsing. With declare: declare -A map; map[key]=value

-i: integer variables

If you set -i, Bash evaluates assignments as arithmetic.

declare -i max_parallel=8

max_parallel=4+4

echo "$max_parallel"

That prints 8.

This can make counters and budgets feel more “typed”:

#!/usr/bin/env bash

set -euo pipefail

declare -i retry_budget=5

declare -i attempt=0

while (( attempt < retry_budget )); do

attempt+=1

echo "attempt=$attempt"

# pretend a command fails the first two times

if (( attempt >= 3 )); then

echo 'success'

break

fi

done

Footgun I watch for: with -i, assignments like value='08' are treated as arithmetic, and a leading zero means octal (so 08 and 09 are outright errors). If you’re dealing with IDs that merely look numeric (build numbers with leading zeros, account IDs), do not mark them as integers.
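To make the octal surprise concrete (the variable names here are illustrative):

```shell
# A plain string keeps an ID exactly as written
build_id='0042'

# Under -i the same digits go through arithmetic evaluation,
# where a leading 0 means octal: 0042 (octal) == 34 (decimal)
declare -i n
n=0042

echo "string=$build_id integer=$n"   # string=0042 integer=34
```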

-r: readonly variables

Readonly is the cheapest way to prevent accidental mutation.

declare -r config_path='/etc/release-orchestrator/config.env'

# later…

# config_path='/tmp/other' # this will fail

I use this for constants such as:

  • base directories
  • script version strings
  • “feature flags” resolved at startup

If you run with set -e, an attempt to reassign a readonly variable will terminate the script, which is usually exactly what you want.
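You can observe the rejection safely in a subshell, since the subshell inherits the readonly attribute and its failed assignment doesn't touch the parent (a small sketch; config_path is the same example constant as above):

```shell
declare -r config_path='/etc/release-orchestrator/config.env'

# The assignment inside the subshell fails, so the subshell exits
# nonzero while the parent shell carries on unharmed.
if ! ( config_path='/tmp/other' ) 2>/dev/null; then
  result='rejected'
fi

echo "$result"   # rejected
```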

-x: exported variables

Exporting is about child processes. If you start a subprocess (Python, Node, curl, docker, terraform, etc.), exported variables become environment variables for that subprocess.

declare -x DEPLOY_ENV='staging'

python -c "import os; print(os.environ['DEPLOY_ENV'])"

I prefer declare -x when I’m already declaring other attributes nearby. Otherwise, plain export is fine.

Security note: exporting secrets makes them visible to any child process you execute. In CI, that can include helpers you didn’t realize were invoked. I keep secret exports scoped as tightly as possible (often inside a function), and I avoid set -x (trace) around secret handling.

-l and -u: automatic case conversion (Bash 4+)

These are underrated. They’re built-in case mapping on assignment.

declare -l normalized_env

normalized_env='StAgInG'

echo "$normalized_env" # prints 'staging'

declare -u normalized_region

normalized_region='us-east-1'

echo "$normalized_region" # prints 'US-EAST-1'

I use this when parsing command-line flags or environment inputs where human-entered casing varies.

Footgun I watch for: this changes the stored value. If you need the original for display or logging, store the raw input separately.
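A minimal version of that pattern: keep the raw value in one variable and let the -l variable hold the normalized copy (names are illustrative).

```shell
raw_env='StAgInG'            # original, preserved for logs/display

declare -l normalized_env
normalized_env="$raw_env"    # stored lowercased by the -l attribute

echo "raw=$raw_env normalized=$normalized_env"
```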

Arrays and associative arrays (-a, -A)

Bash arrays are the first step from “script” to “program”. They let you avoid fragile string parsing.

-a: indexed arrays

Indexed arrays are for ordered lists.

#!/usr/bin/env bash

set -euo pipefail

declare -a rollout_regions

rollout_regions=(

'us-east-1'

'us-west-2'

'eu-west-1'

)

for region in "${rollout_regions[@]}"; do

echo "deploying to $region"

done

Notes I stick to:

  • Use "${arr[@]}" (not "${arr[*]}") when you want to preserve elements safely.
  • Always quote the expansion.
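The difference is easy to see with printf, which prints one line per argument it receives:

```shell
files=('release notes.txt' 'build.log')

# "${files[@]}" expands to one word per element (two lines here)
printf '<%s>\n' "${files[@]}"

# "${files[*]}" joins all elements into a single word (one line)
printf '<%s>\n' "${files[*]}"
```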

-A: associative arrays

Associative arrays are for maps/dictionaries: key → value.

#!/usr/bin/env bash

set -euo pipefail

declare -A service_urls

service_urls=(

['auth']='https://auth.internal'

['billing']='https://billing.internal'

['search']='https://search.internal'

)

selected_service='billing'

echo "${service_urls[$selected_service]}"

This beats parsing key=value lines in a loop when your mapping is in-script.
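On Bash 4.3+, [[ -v map[key] ]] tests key existence directly, and unlike a -z check it distinguishes a missing key from a key whose value is empty:

```shell
declare -A service_urls=(
  [auth]='https://auth.internal'
  [billing]='https://billing.internal'
)

lookup='billing'
if [[ -v service_urls[$lookup] ]]; then
  found="${service_urls[$lookup]}"
else
  found=''
fi

echo "$found"   # https://billing.internal
```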

Compatibility note: associative arrays require Bash 4+. Many Linux distros ship Bash 5.x today, but minimal containers or embedded systems sometimes ship BusyBox sh (no arrays) or dash (no arrays). If you’re writing scripts for unknown environments, check early:

if [[ -z "${BASH_VERSINFO[*]-}" ]]; then

echo 'This script requires bash.' >&2

exit 2

fi

if (( BASH_VERSINFO[0] < 4 )); then

echo 'This script requires Bash 4+ for associative arrays.' >&2

exit 2

fi

Scope rules and declare -g for intentional globals

Scope is where declare feels like a power tool.

Default behavior inside functions: locals

Inside a function, declare name=value creates a local variable by default (similar to local). That’s often what you want.

set_defaults() {

declare timeout_seconds=30

echo "inside timeoutseconds=$timeoutseconds"

}

set_defaults

echo "outside timeoutseconds=${timeoutseconds-}"

Outside the function, timeout_seconds is unset.

-g: force global scope (Bash 4.2+)

Sometimes you want a function to update a global variable (for example, after probing the environment). That’s what -g is for.

#!/usr/bin/env bash

set -euo pipefail

detect_shell_capabilities() {

# pretend we detect a feature

declare -g has_assoc_arrays='yes'

}

detect_shell_capabilities

echo "$has_assoc_arrays"

Without -g, you’d silently create a local variable and the caller wouldn’t see the result.

A pattern I use: “global config, local work”

I typically keep:

  • configuration variables global (often readonly once resolved)
  • temporary variables local inside functions

Example:

#!/usr/bin/env bash

set -euo pipefail

declare -r script_version='1.7.0'

declare deploy_env='staging'

parse_args() {

while (( $# )); do

case "$1" in

--env)

shift

declare -g deploy_env="$1"

;;

*)

echo "unknown arg: $1" >&2

exit 2

;;

esac

shift

done

}

parse_args "$@"

echo "version=$scriptversion env=$deployenv"

That uses declare -g so parseargs updates the global deployenv intentionally.

Function introspection: -f and -F

Bash also treats declare as a function inspection tool.

-f: print function definitions

declare -f prints function definitions.

build_release() {

echo 'building…'

}

declare -f build_release

This is handy when you’re debugging a complex shell session (or a sourced script) and want to confirm which function body is currently defined.

-F: print function names and attributes only

declare -F prints just names (and some attributes), not the function body.

declare -F

I use this when I’m verifying that a function exists without dumping a wall of code.
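A common guard built on this: check that a function exists before relying on it (useful when a script optionally sources plugin files; the function names below are illustrative).

```shell
build_release() {
  echo 'building…'
}

# declare -F exits 0 if the function is defined, nonzero otherwise
if declare -F build_release >/dev/null; then
  have_build='yes'
else
  have_build='no'
fi

declare -F no_such_function >/dev/null || have_missing='no'

echo "build_release=$have_build no_such_function=$have_missing"
```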

Advanced pattern: namerefs (declare -n) for safer helpers

If you’ve ever seen a Bash script reach for eval to “return” values, you’ve seen a risk: quoting mistakes and unexpected input can turn into code execution.

Namerefs are the better tool when you want a helper to write into a variable chosen by the caller.

-n: name reference variables (Bash 4.3+)

Here’s a helper that parses KEY=VALUE and writes into two caller-provided variables.

#!/usr/bin/env bash

set -euo pipefail

split_kv() {

declare -n out_key="$1"

declare -n out_value="$2"

declare input="$3"

out_key="${input%%=*}"

out_value="${input#*=}"

}

declare key value

split_kv key value 'DEPLOY_ENV=staging'

echo "key=$key value=$value"

This reads cleanly:

  • The caller passes variable names (key, value) and the input.
  • The function assigns to the caller’s variables without eval.

Rules I follow with namerefs:

  • Validate arguments early (ensure the provided names are not empty).
  • Keep the nameref lifetime short (declare it inside the function and don’t pass it around).
  • Avoid collisions with special variable names.
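The first rule can be sketched like this; set_output and its name regex are illustrative, not a standard API:

```shell
set_output() {
  # Accept only plausible variable names; reject anything else early,
  # before the nameref ever exists.
  [[ "$1" =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]] || { echo "bad variable name: $1" >&2; return 2; }
  declare -n out_ref="$1"
  out_ref="$2"
}

declare result
set_output result 'ok'
echo "$result"   # ok

set_output 'rm -rf /' 'nope' 2>/dev/null || echo 'rejected bad name'
```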

A real-world use: updating an associative array by reference

If you build small “library” scripts, namerefs make them much nicer.

#!/usr/bin/env bash

set -euo pipefail

put_header() {

declare -n headers_ref="$1"

declare header_name="$2"

declare header_value="$3"

headers_ref["$header_name"]="$header_value"

}

declare -A headers

put_header headers 'Accept' 'application/json'

put_header headers 'X-Request-ID' 'req-93f2b2'

for k in "${!headers[@]}"; do

echo "$k: ${headers[$k]}"

done

This is one of those places where Bash starts to feel less like string glue and more like a real scripting language.

Common mistakes (and how I avoid them)

These are the issues I see repeatedly in code reviews.

1) Expecting declare inside a function to affect the caller

By default it won’t, because it makes locals.

Fix: use declare -g for intentional global updates, or return data via stdout, or use a nameref.

2) Losing array elements due to incorrect expansion

If you expand with * instead of @, quoted "${arr[*]}" joins all elements into one word, and unquoted expansion re-splits on whitespace; either way, element boundaries are lost.

Safer loop:

for item in "${items[@]}"; do

do_something "$item"

done

3) Marking “IDs” as integers

If you store things like account IDs, build IDs, or anything with leading zeros, don’t set -i. Treat them as strings.

4) Exporting too early (or too broadly)

Exporting is contagious: once it’s in the environment, every child process sees it.

Fix: export as late as possible and scope it.

run_with_token() {

declare -x API_TOKEN="$1"

shift

"$@"

}

5) Using declare when plain assignment is clearer

I use declare for attributes and for inspection. For simple local variables, plain assignment is often more readable.

do_work() {

local_path='/tmp/workdir'

# clear enough without declare

}

(Yes, declare can do that too; I just don’t think it earns its keep there.)

When I do and don’t use declare

Here’s my rule-of-thumb guidance.

Use declare when:

  • you want attributes: -r, -i, -x, -a, -A, -l, -u, -n, -g
  • you’re debugging scope or types and want declare -p
  • you’re writing functions meant to be reused (and want predictable contracts)

I usually don’t use declare when:

  • I’m assigning a simple temporary inside a function and the value is obvious
  • portability matters and the script might run under sh (POSIX shell) rather than Bash
  • the “type” is mostly performative (for example, forcing integers for values that never do arithmetic)

Quick reference: the declare flags I actually use

If you want a cheat sheet you can keep in your head, it’s this:

  • declare -p var → print definition + attributes (my #1 debugging tool)
  • declare -r var=value → constant
  • declare -x var=value → exported to subprocesses
  • declare -i var=0 → integer arithmetic assignments
  • declare -a arr=(...) → indexed array
  • declare -A map=(...) → associative array (Bash 4+)
  • declare -n ref=other_var → nameref (Bash 4.3+)
  • declare -g var=value → set global inside function (Bash 4.2+)
  • declare -f func / declare -F → inspect functions

And one subtle but powerful feature:

  • Use + instead of - to remove an attribute.

Turning attributes on and off with +

This is something people miss because it’s not obvious until you need it.

Remove -x (stop exporting)

Sometimes you want to export a variable for one phase, then ensure it doesn’t leak into later subprocesses.

#!/usr/bin/env bash

set -euo pipefail

declare -x DEPLOY_ENV='staging'

some_tool_that_reads_env

# Stop exporting (variable remains set in the current shell)

declare +x DEPLOY_ENV

# Now DEPLOY_ENV won't be in the environment of child processes

another_tool

This is a nice alternative to unset when you still want the variable available within the script.
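You can verify the before/after by asking a child shell what it sees (bash -c starts a fresh child process, so it only sees exported variables):

```shell
declare -x DEPLOY_ENV='staging'
before=$(bash -c 'echo "${DEPLOY_ENV-<unset>}"')

declare +x DEPLOY_ENV
after=$(bash -c 'echo "${DEPLOY_ENV-<unset>}"')

echo "before=$before after=$after still_set_locally=$DEPLOY_ENV"
```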

Remove -i if you realize you need a string

If you accidentally declared something as an integer and later discover it contains non-numeric characters, it’s better to remove the attribute than to fight weird coercions.

declare -i build_number=0

build_number=42

# later you decide you need a string like "042"

declare +i build_number

build_number='042'

Practical scenario: robust config loading without eval

A common “real script” problem is reading configuration from environment variables and .env-style files safely.

My baseline approach:

  • Treat configuration as strings by default.
  • Convert only what you must (like timeouts) into integers.
  • Mark final config readonly after validation.
  • Export only what subprocesses truly need.

Here’s a self-contained pattern:

#!/usr/bin/env bash

set -euo pipefail

die() { echo "error: $*" >&2; exit 1; }

# Defaults

declare DEPLOY_ENV="${DEPLOY_ENV-staging}"

declare TIMEOUT_SECONDS="${TIMEOUT_SECONDS-30}"

declare API_BASE_URL="${API_BASE_URL-https://api.internal}"

validate_config() {

# Normalize casing where it helps

declare -l env_lc="$DEPLOY_ENV"

case "$env_lc" in

staging|prod|dev) ;;

*) die "DEPLOY_ENV must be dev|staging|prod (got: $DEPLOY_ENV)" ;;

esac

DEPLOY_ENV="$env_lc"

# Convert timeout to int safely

if [[ "$TIMEOUT_SECONDS" =~ ^[0-9]+$ ]]; then

declare -i t="$TIMEOUT_SECONDS"

(( t > 0 && t <= 600 )) || die "TIMEOUT_SECONDS out of range (1..600): $TIMEOUT_SECONDS"

else

die "TIMEOUT_SECONDS must be an integer (got: $TIMEOUT_SECONDS)"

fi

# Lock down config. Use readonly here: inside a function, declare -r
# would create readonly *locals*, not freeze the globals.

readonly DEPLOY_ENV TIMEOUT_SECONDS API_BASE_URL

}

validate_config

echo "env=$DEPLOY_ENV timeout=$TIMEOUT_SECONDS base=$API_BASE_URL"

I like this because it keeps declare doing what it’s best at: expressing intent (types/constraints) and making it hard to accidentally mutate config later.

Practical scenario: CI scripts and the “why is this exported?” mystery

CI systems often inject environment variables automatically. I’ve debugged scripts where:

  • a variable exists “magically” in CI but not locally
  • a secret gets exported and ends up in logs
  • a value is overwritten by a step you forgot runs in the same shell

declare -p is my first diagnostic step.

A debugging routine I use

When I suspect “attribute drift”, I’ll add a small diagnostic function gated by a flag.

debug_vars() {

[[ "${DEBUG_SHELL-}" == "1" ]] || return 0

echo '--- debug vars ---' >&2

declare -p DEPLOY_ENV 2>/dev/null || true

declare -p API_TOKEN 2>/dev/null || true

declare -p TIMEOUT_SECONDS 2>/dev/null || true

echo '------------------' >&2

}

Then I call debug_vars early in the script. The 2>/dev/null hides the error message when a variable is unset, and the || true keeps the failed declare -p from aborting the script under set -e.

set -u + declare: a safe pattern

set -u (nounset) is great, but it changes how you should check variables.

Instead of directly referencing a possibly unset variable:

# DON'T under set -u: this reference aborts if MAYBE_SET is unset

if [[ -n "$MAYBE_SET" ]]; then

echo 'set'

fi

Use parameter expansion defaults:

if [[ -n "${MAYBE_SET-}" ]]; then

fi

And when printing with declare -p, guard it:

declare -p MAYBE_SET 2>/dev/null || echo 'MAYBE_SET is unset'

Performance note: built-ins vs spawning processes

One reason I lean on declare -l/-u and arrays is performance and reliability.

  • declare is a built-in, so it avoids forking.
  • Using tr, awk, sed, or cut for tiny transformations adds process overhead and can introduce quoting/locale surprises.

I’m not obsessive about micro-optimizations, but in CI and containers you can easily run the same helpers hundreds of times. Built-ins keep scripts snappy and easier to reason about.
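For example, lowercasing with the shell itself instead of forking tr; ${var,,} is the parameter-expansion equivalent of the -l attribute (both Bash 4+):

```shell
input='MiXeD-CaSe'

# Attribute-driven: the -l attribute lowercases on assignment
declare -l lowered
lowered="$input"

# Parameter expansion does the same inline, no attribute needed
expanded="${input,,}"

# The forked alternative spawns one extra process per call
forked=$(printf '%s' "$input" | tr '[:upper:]' '[:lower:]')

echo "$lowered $expanded $forked"   # mixed-case mixed-case mixed-case
```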

The -t trace attribute (when you’re deep in debugging)

Bash has a trace attribute you can set with declare -t.

In practice, I treat this as “advanced debugging mode”: it can interact with Bash’s tracing and DEBUG traps in ways that aren’t obvious if you’ve never used them. If you’re already using:

  • set -x to trace commands
  • trap '...' DEBUG to run code before each command
  • function tracing options like set -o functrace

…then declare -t can be part of your toolbox.

Example (simple demonstration):

#!/usr/bin/env bash

set -euo pipefail

trap 'echo "DEBUG: running $BASH_COMMAND" >&2' DEBUG

traced_func() {

echo "hello from traced_func"

}

declare -t traced_func

traced_func

If you don’t use DEBUG traps, you can usually ignore -t entirely.

Portability: Linux Bash vs macOS Bash vs /bin/sh

Since the topic here is Linux, it’s worth making the environment assumptions explicit:

  • declare is a Bash feature (also present as typeset in other shells).
  • #!/bin/sh scripts should not rely on declare (many systems link sh to dash or BusyBox).
  • Many advanced flags (-A, -g, -l, -n, -u) require Bash 4+ or newer.

If you want your script to use these reliably on Linux, I recommend:

1) Use a Bash shebang:

#!/usr/bin/env bash

2) Fail early if not in Bash or version is too old:

[[ -n "${BASH_VERSINFO[*]-}" ]] || { echo ‘Requires bash‘ >&2; exit 2; }

# Example: require Bash 4+ for associative arrays

(( BASH_VERSINFO[0] >= 4 )) || { echo ‘Requires Bash 4+‘ >&2; exit 2; }

That may feel strict, but it saves you from mysterious runtime errors when a script ends up executed by the wrong shell.

Patterns I use for “script as a small program”

Once a Bash script grows, the best thing you can do is make state explicit and well-scoped.

Pattern 1: read-only globals after parsing

I like to parse arguments into globals (or a config map), validate, then freeze.

#!/usr/bin/env bash

set -euo pipefail

declare DEPLOY_ENV='staging'

declare -i MAX_PARALLEL=4

parse_args() {

while (( $# )); do

case "$1" in

--env) shift; declare -g DEPLOY_ENV="$1" ;;

--max-parallel) shift; declare -g MAX_PARALLEL="$1" ;;

*) echo "unknown arg: $1" >&2; exit 2 ;;

esac

shift

done

}

parse_args "$@"

# validate types/values

[[ "$DEPLOY_ENV" =~ ^(dev

staging

prod)$ ]]| { echo ‘bad env‘ >&2; exit 2; }

(( MAXPARALLEL >= 1 && MAXPARALLEL &2; exit 2; }

# freeze

declare -r DEPLOY_ENV MAX_PARALLEL

Now the rest of the script can assume config won’t change.

Pattern 2: maps for option dispatch (associative arrays)

If you have environment-to-endpoint or service-to-port mappings, associative arrays are both safer and more readable than chains of case statements.

declare -A endpoint_by_env=(

[dev]='https://api.dev.internal'

[staging]='https://api.staging.internal'

[prod]='https://api.internal'

)

api_url="${endpoint_by_env[$DEPLOY_ENV]}"

If the key is missing, you can handle it explicitly:

if [[ -z "${endpointbyenv[$DEPLOY_ENV]-}" ]]; then

echo "unknown DEPLOYENV: $DEPLOYENV" >&2

exit 2

fi

Pattern 3: pass-by-reference helpers for reuse

Any time I find myself wanting a function to “return” multiple values, namerefs are the cleanest option.

Example: parse --key=value into key and value.

parse_flag_kv() {

declare -n out_k="$1"

declare -n out_v="$2"

declare arg="$3"

out_k="${arg%%=*}"

out_v="${arg#*=}"

# strip leading "--" if present

out_k="${out_k#--}"

}

declare k v

parse_flag_kv k v '--env=staging'

echo "k=$k v=$v"

Common pitfalls specific to declare (the ones that bite late)

Here are a few more “practical scars” beyond the earlier list.

1) declare output is reusable, but don’t eval it blindly

declare -p prints a command-like representation. That’s useful for humans, logs, and debugging—but I avoid feeding it back into eval in production code.

If you need serialization, write a small, explicit encoder (for example: print key=value pairs with proper escaping) rather than building a parser around declare output.
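A hedged sketch of that idea: serialize_config is a hypothetical helper that emits key=value lines, using printf %q so values survive being read back by a shell.

```shell
declare -A config=(
  [env]='staging'
  [note]='value with spaces'
)

serialize_config() {
  declare -n map_ref="$1"
  declare key
  for key in "${!map_ref[@]}"; do
    # %q shell-quotes the value so the line round-trips safely
    printf '%s=%q\n' "$key" "${map_ref[$key]}"
  done
}

# Associative-array iteration order is unspecified, so sort for stable output
serialize_config config | sort
```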

2) Don’t confuse “local variable” with “local attribute change”

Inside a function, declare -x var=value doesn’t just set the variable; it also sets the export attribute on that variable (local or global depending on how it’s declared).

If you intend to export something only for one command, I often prefer a subshell to keep leakage contained:

(

declare -x API_TOKEN="$token"

call_api

)

After the subshell ends, the parent shell environment is unchanged.

3) Local arrays are great, but be deliberate about returning them

You can’t “return an array” in Bash the way you might in Python.

My go-to options are:

  • print values to stdout and capture them (works best when values have no newlines)
  • accept a nameref to an output array

Example: fill an output array by reference:

list_targets() {

declare -n out="$1"

out=("api" "worker" "scheduler")

}

declare -a targets

list_targets targets

printf '%s\n' "${targets[@]}"

Troubleshooting checklist: when declare reveals the bug

If something is “mysteriously wrong” with a variable, this is the checklist I run.

1) Print it with attributes:

declare -p var 2>/dev/null || echo 'var is unset'

2) Check scope by printing inside the function that sets it and again outside.

3) If it’s numeric, confirm whether -i is set (and whether that’s appropriate).

4) If subprocesses behave differently, check whether the variable is exported (-x).

5) If arrays behave weirdly, print the full declaration and the length:

declare -p arr

echo "len=${#arr[@]}"

6) If keys are missing in an associative array:

for k in "${!map[@]}"; do echo "key=$k"; done

This is basic, but it’s fast and it ends arguments in code reviews.

Summary: how I’d teach declare to my past self

If you only take a few habits from this:

  • Use declare -p when debugging variable weirdness.
  • Use declare -r for config constants and freeze validated config.
  • Use declare -i only for real arithmetic, not numeric-looking IDs.
  • Use declare -a/-A to avoid string parsing.
  • Use declare -g when a function must intentionally set a global.
  • Use declare -n instead of eval for pass-by-reference.

That’s the difference between “Bash as glue” and “Bash as a reliable little language.”
