As a seasoned Linux system administrator, ensuring commands execute successfully is crucial for automating tasks and building robust scripts. When a command runs in Bash, an exit status is returned indicating whether it completed successfully. There are two main methods for checking if a command succeeded in Bash – using an if statement to evaluate the exit code, or checking the special $? variable.
Why Check If a Command Succeeded?
Here are some key reasons you may want to check if a previous command succeeded in Bash:
- Script robustness – Checking command success allows you to handle failures gracefully in scripts through control flow. This makes scripts more robust and stable.
- Automating workflows – In automation scripts, you often chain together sequences of commands. Verifying each step works is important before executing subsequent steps.
- Conditional execution – You can conditionally execute commands based on the success or failure of previous commands. This is helpful for triggering recovery workflows or rollback procedures on failures.
- Troubleshooting errors – Checking a command's exit status makes diagnosing errors easier when scripts don't work as expected.
Using an If Statement to Check Command Success
The most straightforward way to check a command's success status is to use an if statement. The exit code of the last executed command is evaluated in the expression.
Here is an example script showing this approach:
#!/bin/bash
# Delete files command
rm test.txt
# Check if delete succeeded
if [ $? -eq 0 ]; then
    echo "Successfully deleted file"
else
    echo "Delete failed" >&2
fi
Breaking this down:
- rm test.txt attempts to delete a file called test.txt.
- if [ $? -eq 0 ] checks whether the last exit code, $?, equals 0. 0 means success.
- The then block runs if the delete succeeded. It prints a success message.
- The else block handles the failure case. It prints an error to stderr.
When this script runs, it will either print "Successfully deleted file" if test.txt existed and was deleted, or "Delete failed" if the file did not exist or could not be deleted.
This approach provides a clean way to branch based on success or failure of any preceding command.
One caveat is that you must check $? immediately after the command you want to verify, before executing any other commands. This is because $? gets overwritten after each command.
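A variant that sidesteps this caveat is to put the command itself in the if condition, since if branches on the command's exit status directly. A minimal sketch, reusing the same test.txt delete:

```shell
#!/bin/bash
# Put the command directly in the if condition; if branches on its
# exit status, so there is no window for $? to be overwritten
if rm test.txt 2>/dev/null; then
    echo "Successfully deleted file"
else
    echo "Delete failed" >&2
fi
```

This form is often preferred in modern Bash style because the command and its success check cannot drift apart.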
Checking Command Success with the $? Variable
In Bash, the special $? variable always contains the exit status of the most recently executed command. It is documented under "Special Parameters" in the Bash manual (man bash).
Here is an example using $? to check a command's success:
#!/bin/bash
# Run command
ping -c3 google.com
# Check exit code
if [ $? -eq 0 ]; then
    echo "Ping succeeded"
else
    echo "Ping failed" >&2
fi
This runs the ping command to send 3 packets to google.com. It then checks if $? is 0 to see if ping succeeded.
The exit code $? will be:
- 0 – Command successful
- 1-255 – Failure with custom exit code
So comparing $? to 0 checks for success, while any non-zero value indicates failure.
You can test in either direction: [ $? -eq 0 ] checks explicitly for success, while [ $? -ne 0 ] checks explicitly for failure.
Some advantages of using $? for checking command success:
- No need to evaluate command directly in if statement
- Can be checked whenever needed after command runs
- Handles all underlying failure codes
The main drawback is $? gets overwritten by every command, so check it promptly before executing others.
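A simple workaround is to save $? into a variable immediately after the command, so the status survives whatever runs next. A minimal sketch:

```shell
#!/bin/bash
# Capture the exit status right away so later commands cannot clobber it
grep -q root /etc/passwd
status=$?

echo "Doing other work..."    # $? now reflects this echo, not the grep

if [ "$status" -eq 0 ]; then
    echo "Pattern found"
else
    echo "Pattern not found" >&2
fi
```

The saved variable can then be checked, logged, or passed around at any later point in the script.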
Handling Different Failure Codes
Sometimes a command may return different non-zero exit codes to indicate specific failure types.
For example scp, the secure copy command, commonly produces these status codes:
- 1 – General errors
- 2 – Protocol mismatch
- 127 – Command not found (this code actually comes from the shell when the scp binary is missing)
To handle these cases individually, you can check for specific $? values:
#!/bin/bash
# Secure copy file
scp file.txt server:/path
# Save the exit code immediately -- each [ ] test below would
# otherwise overwrite $? before the elif branches run
status=$?
# Check for success
if [ $status -eq 0 ]; then
    echo "SCP succeeded"
# Check for command not found
elif [ $status -eq 127 ]; then
    echo "scp command not installed"
# General error
elif [ $status -eq 1 ]; then
    echo "Unknown scp error"
fi
This demonstrates an elif cascade to check the exact exit code. The same idea works for all commands that return meaningful status codes.
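When several exit codes need handling, a case statement on the saved status is often cleaner than an elif chain. A sketch using the same hypothetical file.txt transfer:

```shell
#!/bin/bash
# Handle multiple exit codes with a case statement on a saved status
scp file.txt server:/path
status=$?

case "$status" in
    0)   echo "SCP succeeded" ;;
    127) echo "scp command not installed" >&2 ;;
    1)   echo "Unknown scp error" >&2 ;;
    *)   echo "scp failed with exit code $status" >&2 ;;
esac
```

The catch-all * branch also reports any code you did not anticipate, which the elif version silently ignores.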
Functions for Encapsulating Success Checks
Encapsulating success checking into reusable functions is a best practice. This keeps your script logic clean and simple.
Here is an example:
#!/bin/bash
# Function returns true (0) if ping succeeds
function test_connectivity {
    ping -c1 "$1" >/dev/null 2>&1
    # Check for success
    if [ $? -eq 0 ]; then
        return 0
    else
        return 1
    fi
}
# Test connectivity
test_connectivity google.com
# Check result
if [ $? -eq 0 ]; then
    echo "Ping succeeded"
else
    echo "Ping failed" >&2
    exit 1
fi
The test_connectivity function handles the actual ping and success checking, then returns an appropriate code. This gets assigned to $? when called, letting the main script simply check $? to see the result.
This demonstrates clean separation of concerns – the function worries about the internals of running and validating a command, while the main script focuses on control flow and logic.
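Since a function's return status defaults to the exit status of its last command, the explicit if/return pair can be collapsed entirely. A minimal sketch of the same connectivity check:

```shell
#!/bin/bash
# A function returns the exit status of its last command by default,
# so no explicit return statement is needed here
test_connectivity() {
    ping -c1 "$1" >/dev/null 2>&1
}

# The function call itself can sit in the if condition
if test_connectivity google.com; then
    echo "Ping succeeded"
else
    echo "Ping failed" >&2
fi
```

Both versions behave identically; the shorter form is common in production scripts.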
Executing Commands Based on Success or Failure
A common pattern is continuing script execution if a command succeeds, but terminating on failure. For example, consider this startup script flow:
- Update apt repository metadata
- Install needed packages
- Start services
Here package installs depend on the apt update, and services depend on packages. So later steps should only run if earlier ones succeed.
Using an error handling function simplifies the logic:
#!/bin/bash
# Function to check a command status
check_status() {
    # Command and its arguments are passed in as "$@"
    "$@"
    # Check exit status
    if [ $? -ne 0 ]; then
        echo "ERROR: $* failed" >&2
        exit 1
    fi
}
# Run commands, checking status
check_status apt update
check_status apt install nginx git -y
check_status systemctl start nginx
# Notify of successful startup
echo "All services started!"
Now each command gets checked automatically, terminating the script early on any failures. The main logic stays clean and readable.
Checking Background Command Completion
Sometimes you spawn long-running commands in the background using &. How do you check if a background process has completed successfully?
The method is to save the background process PID using the special $! variable, then wait on that PID and check $? once it finishes:
#!/bin/bash
# Start long sync in background
rsync -avh /data /backup &
bg_pid=$!
# Do other work
echo "Setting up databases..."
setup_databases    # placeholder for work done while the sync runs
# Wait for sync to finish
wait $bg_pid
# Check status
if [ $? -eq 0 ]; then
    echo "Background sync succeeded"
else
    echo "Background sync failed" >&2
fi
This allows other tasks to run concurrently until the background job completes. When you wait on the PID, the exit code gets populated to $? so you can validate it.
The same approach works for background jobs started with nohup as well.
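This extends naturally to several background jobs: collect each PID in an array, then wait on each one, since wait PID returns that job's exit status. A minimal sketch with sleep standing in for real jobs such as rsync:

```shell
#!/bin/bash
# Launch several background jobs and record their PIDs
pids=()
for i in 1 2 3; do
    sleep 0.2 &              # stand-in for a real long-running job
    pids+=("$!")
done

# Wait on each PID individually; wait's exit status is that job's status
failed=0
for pid in "${pids[@]}"; do
    if wait "$pid"; then
        echo "Job $pid succeeded"
    else
        echo "Job $pid failed" >&2
        failed=1
    fi
done
exit "$failed"
```

Waiting per-PID like this lets the script report exactly which job failed, rather than only an aggregate result.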
Checking Previous Command from Script History
A lesser-known trick is using the Bash history builtins to re-test a command you ran earlier, even if you did not capture its status at the time. This relies on the fc builtin, short for "fix command", which can replay a prior command from history. Because history is only recorded in interactive shells, this technique applies to terminal sessions rather than to scripts.
Consider this example:
# Run earlier in an interactive session
ping -c3 google.com
echo "Doing other stuff..."
# Later: re-execute the most recent command starting with "ping"
fc -s ping
if [ $? -eq 0 ]; then
    echo "Ping succeeds"
fi
Even though time has passed, fc -s re-executes the matching command from history and updates $?. Keep in mind that the status reflects the fresh run, not the original one – it tells you whether the command succeeds now. A use case is re-validating steps from an existing Bash terminal session after the fact.
Integrating Success Checks in CI Pipelines
Verifying command success is a common requirement in CI/CD pipelines. For example, you may have a pipeline job like:
Stage 1
- Build code
- Run unit tests
- Push container image
Stage 2
- Deploy container
- Smoke test
The second stage deploy should only occur if the first stage build succeeded. And smoke tests validate readiness after deploy.
In Jenkins, this pattern looks like:
node {
    stage('Build') {
        // Build steps
        sh 'mvn package'
        // Check result (result is null while the build is still passing)
        if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
            echo 'Moving to deploy'
        } else {
            error 'Build failed, stopping early'
        }
    }
    stage('Deploy & Test') {
        // Deploy steps
    }
}
This allows downstream stages to depend on success of upstream stages. Each stage can validate previous steps as needed.
Most other CI/CD tools like Travis CI, CircleCI, GitHub Actions provide similar ways to check prior job status before proceeding. This is essential for robust pipeline workflows.
Conclusion
Checking whether a command or script step succeeded is a ubiquitous requirement in shell programming. Using an if statement to evaluate exit codes is the standard approach, and the special $? variable provides a convenient programmatic handle on a command's status in Bash. Integrating simple checks into your scripts and workflows aids reliability, troubleshooting, and control flow. Both methods are easy to adopt once you understand the basics of exit codes, and mastering them will help tame complex admin workflows.