I still remember the first time I opened an Ubuntu terminal and felt like I’d stepped into a control room with no labels. A mouse-driven desktop gives you hints everywhere, but the shell expects you to be precise. That precision is exactly why I keep coming back to it. The terminal is where I can see the system’s truth without distractions, where scripts are born, and where tiny commands add up to real automation. If you’re new, the command line can feel like a dark hallway. If you’re experienced, it can still surprise you with sharp edges.
Below I walk through 25 basic Ubuntu commands I use constantly. I’ll show what they do, when I reach for them, and the mistakes I see most often. I’ll also thread in modern 2026 habits: AI-assisted workflows, better defaults, and small safety checks I rely on. You’ll walk away knowing how to move around, manage files, inspect data, search, handle permissions, watch the system, and install software. These are the moves I expect any developer I mentor to have ready on day one.
1) Orientation and movement: pwd, ls, cd
When I get lost, I anchor myself with three commands. Think of them like a compass, a map, and a step forward.
pwd — where am I right now?
pwd
I run this when I’m unsure of the current working directory. It prints the full path. If you’re troubleshooting a script or a CI step, this is the fastest sanity check.
Common mistake: assuming ~ is always home inside scripts. Use pwd to confirm actual paths in logs.
Practical scenario: I use pwd right before a destructive rm -r or before a tar backup. It’s one tiny command that prevents huge disasters.
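That habit can be sketched as a short, reproducible sequence. The scratch directory below is fabricated so the destructive step is harmless:

```shell
# build a scratch area so the rm -r at the end risks nothing real
mkdir -p scratch/tmp && touch scratch/tmp/old.cache
cd scratch
pwd          # 1) confirm I am where I think I am
ls           # 2) confirm tmp/ is really what I mean to delete
rm -r tmp/   # 3) only now run the destructive command
```

Two seconds of checking, and the delete happens exactly where I intended.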
ls — what’s around me?
ls
ls -l
ls -a
ls -lh
ls shows the contents of a directory. I default to ls -lh when I care about file sizes and timestamps. ls -a reveals hidden dotfiles like .env or .git.
Performance note: listing a directory with tens of thousands of files can take 50–200ms on SSDs, sometimes longer on network mounts.
Edge cases and tips:
- ls -la is my quick “show everything with details” view.
- If the directory is huge, I run ls | head to avoid terminal flood.
- Sorting by time (ls -lt) helps me find the newest file quickly.
cd — move through folders
cd /var/log
cd ..
cd -
cd changes your working directory. cd .. goes up a level; cd - jumps back to the previous directory, which is a huge time saver.
Common mistake: forgetting you’re inside a folder with the same name as another path and deleting in the wrong place. I often run pwd before any destructive command.
Practical scenario: When I’m working in two places (say ~/projects/api and ~/projects/web), I use cd - like a toggle. It keeps my flow uninterrupted.
2) Create, copy, move, remove: mkdir, touch, cp, mv, rm
This is the core file-management toolkit. I treat it like a workshop: build, label, move, discard.
mkdir — make directories
mkdir logs
mkdir -p projects/client-a/api
-p creates parent directories if they don’t exist. I use it in scripts to avoid failures when intermediate paths are missing.
Common mistake: creating folders with spaces and forgetting to quote them later. If you must have spaces, use quotes every time.
Scenario: When starting a new project, I’ll scaffold a tree in one line:
mkdir -p src/{api,web,shared} scripts docs
That one command saves a minute of clicking.
touch — create an empty file
touch .env
touch is a fast way to create a file or update its timestamp. I use it to scaffold project files before editing.
Common mistake: thinking touch adds content. It does not. It only creates the file if missing or updates timestamps.
cp — copy files or folders
cp config.example.yml config.yml
cp -r templates/ backups/templates-2026-01-27/
cp duplicates files. Use -r for directories. I prefer cp -a for backups because it preserves timestamps and permissions.
Common mistake: copying directories without -r and thinking it worked. Always check ls afterward.
Alternative approach: If you want progress reporting for large copies, use rsync -a --info=progress2 (or rsync -aP) instead of cp -a. It’s not part of the 25 basics, but it’s the tool I reach for when the copy matters.
mv — move or rename
mv draft.md posts/ubuntu-basics.md
mv moves or renames. I use it constantly to version files with clear names.
Edge case: moving across file systems can be slower than you expect because it becomes a copy + delete.
Practical scenario: After generating a report, I do:
mv report.txt reports/2026-01-27-system-audit.txt
Naming files with dates makes diffing and tracking a breeze.
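The stamp can be generated rather than typed; date +%F prints YYYY-MM-DD, so the names sort chronologically on their own. The report file here is a stand-in for the demo:

```shell
mkdir -p reports
printf 'disk: ok\nservices: ok\n' > report.txt      # stand-in for a real report
# date +%F expands to today's date in YYYY-MM-DD form
mv report.txt "reports/$(date +%F)-system-audit.txt"
ls reports
```

I keep this pattern in scripts too, so every run archives under a fresh, sortable name.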
rm — remove (with care)
rm old.log
rm -r tmp/
rm -i important.txt
rm deletes files. It does not go to the trash. -r removes directories; -i asks for confirmation. I use -i when I’m tired or working in production.
Common mistake: rm -r * in the wrong folder. I avoid this by running pwd and ls first.
Modern safety habit: I alias rm to rm -i on my personal machines. I don’t do it on servers because scripts might rely on the raw behavior.
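On my personal machines, that habit is two lines in ~/.bashrc (the alias names are my own convention, not anything standard):

```shell
# in ~/.bashrc — interactive shells only, never on shared servers
alias rm='rm -i'    # prompt before every delete
alias ll='ls -lh'   # human-readable long listing
# confirm the alias took effect
alias rm
```

Scripts invoked non-interactively don’t expand aliases by default, which is exactly why this is safe on a laptop but misleading on a server.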
3) Read and inspect files: cat, less, head, tail
Reading files well is a superpower. I treat these commands like different lenses on the same document.
cat — print the whole file
cat README.md
cat dumps the file to the terminal. It’s great for small files like config snippets.
Performance note: reading a 100MB file can flood your terminal. For huge files, use less.
less — page through a file
less /var/log/syslog
less lets you scroll and search (/keyword). I use it for logs and long configs. Quit with q.
Pro tip: less -S stops line wrapping, which is cleaner for long log lines.
head — show the start
head -n 20 data.csv
head shows the first lines. I use it when previewing large datasets.
tail — show the end (and follow)
tail -n 50 app.log
tail -f app.log
tail -f follows a file as it grows. It’s perfect for watching logs during a deploy.
Edge case: log rotation can break tail -f. If logs rotate often, use tail -F instead.
Practical scenario: I often combine tail -f with another terminal where I’m triggering requests, so I can watch logs in real time.
4) Search and filter: grep, find
These two commands are my “flashlight” and “metal detector.” They help me locate content fast.
grep — search inside files
grep -n "ERROR" app.log
grep -R "DATABASE_URL" .
grep finds matching lines. I rely on -n for line numbers and -R for recursive search.
Common mistake: forgetting quotes around patterns with special characters. If your search includes * or ?, quote it.
Practical scenarios:
- I use grep -n "TODO" -R src/ during code reviews to find unfinished work.
- When debugging, I run grep -n "Request ID" app.log to trace a specific request.
Performance considerations: Recursive grep on large directories can be slow. I usually scope it to src/ or config/ rather than the repository root (.).
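A scoped search can also filter by file type; --include is a GNU grep option, and the tiny config tree below is fabricated for the demo:

```shell
mkdir -p config
printf 'timeout: 30\n' > config/app.yml
printf 'debug = true\n' > config/legacy.ini
# recurse only under config/ and only into *.yml files
grep -Rn --include='*.yml' "timeout" config/
# → config/app.yml:1:timeout: 30
```

The .ini file is never opened, which is the whole point: narrow the scope and the search stays fast.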
find — search by name, type, or time
find . -name "*.log"
find /var -type f -mtime -7
find locates files by rules: name patterns, file type, modified time. I use -mtime -7 to find files changed in the last week.
Performance note: on huge directories, find can take 200–800ms or more. Narrow the path if you can.
Practical scenario: cleaning up old build artifacts:
find ./dist -type f -mtime +14 -delete
I rarely use -delete without a dry run first. I usually run the same find command without -delete to preview what would be removed.
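Here is that preview-first pattern end to end. touch -d is the GNU coreutils flag for backdating, used only to make the demo reproducible; the dist/ files are fabricated:

```shell
mkdir -p dist
touch dist/new.js
touch -d '20 days ago' dist/old.map   # backdate one artifact past the cutoff
# dry run: identical expression, no -delete — just lists the candidates
find ./dist -type f -mtime +14
# real run, only after the preview listed exactly what I expected
find ./dist -type f -mtime +14 -delete
```

Because the two commands share the same expression, the preview is an honest predictor of what -delete will remove.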
5) Permissions and ownership: chmod, chown, sudo
This is the part that confuses most new users. I explain it like a locked house: permissions control who can enter and what they can do.
chmod — change permissions
chmod 644 config.yml
chmod +x deploy.sh
chmod sets file permissions. I use +x to make scripts executable. Numeric modes like 644 are common: owner can read/write, everyone else can read.
Common mistake: chmod 777 everywhere. It works, but it’s a security hole. I only use it for quick throwaway experiments.
Practical scenario: After cloning a repo, I often need to do:
chmod +x scripts/setup.sh
chown — change ownership
sudo chown -R ubuntu:ubuntu /var/www/app
chown changes the owner and group. You typically need sudo for system paths.
Edge case: changing ownership of mounted volumes can fail if the filesystem doesn’t support Unix ownership. If it fails, check mount options.
Practical scenario: If a Docker volume got created as root, I’ll fix ownership so my normal user can edit files without sudo.
sudo — run as admin
sudo systemctl restart nginx
sudo runs a command with elevated rights. I treat it like a sharp tool: use it only when needed.
Common mistake: running everything with sudo out of habit. This can create files you can’t edit later without sudo.
Safety habit: I don’t start with sudo. I add it only after a command fails with “permission denied,” so elevated rights stay the exception, not the default.
6) Storage and archives: df, du, tar
I check space early so I don’t waste time on errors later. Think of df as your fuel gauge and du as your per-folder meter.
df — disk space by filesystem
df -h
df -h shows mounted filesystems and free space in human-friendly units. I check this before large downloads or builds.
Practical scenario: If a build fails with “no space left on device,” I immediately run df -h to see which mount is full.
du — disk usage by directory
du -sh .
du -sh ~/projects/*
du shows how much space a directory uses. -s summarizes; -h keeps it readable.
Performance note: du can take 100–500ms or more for large directories with many small files.
Trick: du -sh * | sort -h gives me a quick “largest folder” view.
tar — bundle files for backup
tar -czf logs-2026-01-27.tar.gz /var/log
tar packages files into an archive. I use -c to create, -z for gzip compression, and -f to name the archive.
Common mistake: reversing the order of flags and filename. The archive name must follow -f.
Practical scenario: Before a system upgrade, I archive configs:
tar -czf configs-backup.tar.gz /etc/nginx /etc/systemd
This gives me a clean rollback story if something breaks.
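I also verify an archive before I trust it: -t lists contents without extracting, and -C chooses the extraction target. The paths below are a made-up demo tree:

```shell
mkdir -p conf
printf 'server { listen 80; }\n' > conf/nginx.conf
tar -czf conf-backup.tar.gz conf          # create + gzip + archive filename
tar -tzf conf-backup.tar.gz               # list contents, nothing extracted
mkdir -p restore
tar -xzf conf-backup.tar.gz -C restore    # extract into restore/ for inspection
```

Listing first catches the classic mistake of archiving the wrong directory before you overwrite anything during a restore.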
7) Processes, services, and packages: ps, top, kill, systemctl, apt
This is how I observe the system in motion and keep it healthy.
ps — list running processes
ps aux
ps aux shows all processes. I scan for high CPU or suspicious entries. Combine with grep to filter.
Example:
ps aux | grep nginx
top — live process view
top
top updates in real time. It’s perfect for quick diagnostics. If a process pegs CPU for more than 10–30 seconds, I investigate.
Practical tip: Press P in top to sort by CPU, or M to sort by memory.
kill — stop a process
kill 12345
kill -9 12345
kill sends a signal to a process. I try the normal signal first; -9 is a last resort when the process won’t respond.
Common mistake: killing the wrong PID. Always confirm with ps or top first.
Safety habit: If I’m unsure, I run ps -p 12345 -o pid,cmd to see what I’m about to kill.
systemctl — manage services
sudo systemctl status docker
sudo systemctl restart ssh
systemctl controls system services. I use status before restart so I can see the current state.
Edge case: some containers or WSL-style environments don’t use systemd. In that case, systemctl may fail and you’ll need a different service manager.
Practical scenario: If a service is flapping, I’ll check journalctl -u service-name (not part of the 25 basics, but the next step).
apt — install and update software
sudo apt update
sudo apt install ripgrep
sudo apt upgrade
apt is Ubuntu’s package manager. I always run apt update before installing to avoid stale package indexes.
Modern note: in 2026, I still rely on apt, but I often pair it with container images or dev environments to keep host systems clean.
Practical scenario: I use apt install for system-wide utilities (like curl, git, ripgrep) and keep project dependencies inside the project’s own toolchain (like npm or pip).
8) The big picture: how the 25 commands map to real workflows
I like to think in workflows rather than single commands. The terminal is a chain of small steps, and these 25 commands are the links.
Workflow 1: Debugging a server error
- pwd and ls to orient inside logs or a project folder.
- tail -f to watch logs as the error happens.
- grep -n "ERROR" to find the root message.
- ps aux or top to confirm the service is running.
- systemctl status and systemctl restart if the service is stuck.
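The log-triage half of that workflow can be rehearsed on a fabricated log file (in real life it would be something under /var/log or your app’s log directory):

```shell
# stand-in log so the steps are reproducible anywhere
printf 'INFO boot\nERROR db timeout\nINFO retry\n' > app.log
tail -n 2 app.log           # recent activity around the failure
grep -n "ERROR" app.log     # line-numbered hit to start the investigation from
```

The line number from grep -n is what I carry into less or my editor to read the surrounding context.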
Workflow 2: Cleaning up disk space
- df -h to see what is full.
- du -sh * to locate the biggest directories.
- find to locate old logs or build artifacts.
- rm -r (carefully) after a pwd check.
Workflow 3: Preparing a quick backup
- mkdir -p backups/YYYY-MM-DD for a clean target.
- cp -a or tar -czf to archive configs.
- ls -lh to confirm size.
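Stitched together, with the date generated rather than typed (the config file is illustrative):

```shell
stamp=$(date +%F)                           # YYYY-MM-DD
mkdir -p "backups/$stamp"
printf 'worker_processes 2;\n' > app.conf   # stand-in for a real config
cp -a app.conf "backups/$stamp/"            # -a preserves times and permissions
ls -lh "backups/$stamp"                     # confirm the copy and its size
```

Three commands, and the backup is dated, complete, and verified.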
9) Subtle edge cases that trip people up
This is where “basic commands” stop being basic. These are the small issues I see all the time in real-world systems.
Symbolic links and cp/rm
If you rm -r a directory that contains symlinks, you might delete the symlink but not the target. This is usually safe, but it can surprise people who expected the target to be removed too.
Hidden files and ls
New users often forget that ls hides dotfiles. That’s why ls -a is essential when you’re debugging config issues.
chmod on directories
A file with 644 is readable by everyone; a directory with 644 is unusable because it needs execute permissions to enter. For directories, I usually set 755 or 750 depending on who should access it.
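When a whole tree needs fixing, I split the two cases with find; stat -c is the GNU form for printing octal modes, and the site/ tree below is fabricated:

```shell
mkdir -p site/assets
touch site/index.html site/assets/app.css
find site -type d -exec chmod 755 {} +   # directories: enterable and listable
find site -type f -exec chmod 644 {} +   # files: writable by owner, readable by all
stat -c '%a %n' site site/index.html     # print the resulting octal modes
```

This keeps files non-executable while every directory stays traversable, which is what a 644-on-a-directory mistake breaks.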
find on massive trees
Running find / can take seconds and create noise by scanning system paths. I almost always scope it to a project or /var.
10) Practical comparisons: traditional vs modern habits
I’ve evolved how I use these commands over time. The commands are the same, but my habits changed as environments and teams changed.
Traditional approach
- ls and manual browsing instead of find
- cat for everything
- frequent sudo even when unnecessary
- loose permissions like chmod 777 for quick fixes
Modern approach (what I recommend in 2026)
- find and grep for fast, repeatable discovery
- less for large files and logs
- minimal sudo, only for system paths
- safer permissions (644, 755, +x) and least privilege
This shift reduces mistakes and makes your workflow more predictable, especially on shared servers or production boxes.
11) AI-assisted workflows without losing fundamentals
I use AI tools daily, but I still treat these commands as the foundation. AI can generate a find command, but you need to know if it’s safe.
Here’s how I integrate AI without losing control:
- I ask for command suggestions, then sanity-check with pwd and ls first.
- I prefer suggestions that show a dry-run option (find without -delete).
- I keep a short “safety checklist” in my head: location, permissions, scope.
If you follow that habit, AI becomes a shortcut, not a liability.
12) Common pitfalls and how I avoid them
- I avoid deleting the wrong folder by running pwd and ls right before any rm -r.
- I avoid permission pain by using sudo only for admin tasks, not for routine file edits.
- I avoid giant terminal spam by using less instead of cat for large files.
- I avoid long searches by narrowing find to a specific path instead of scanning /.
- I avoid broken scripts by using absolute paths when running from cron or CI.
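The absolute-path habit is easy to verify before a script ever reaches cron; command -v prints where a binary actually lives. The crontab line below is a hypothetical example:

```shell
# cron runs with a minimal PATH and an unpredictable working directory,
# so a crontab entry should spell everything out, e.g.:
#   0 2 * * * /usr/bin/tar -czf /home/me/backups/logs.tar.gz /var/log
command -v tar    # prints the absolute path to hard-code in the entry
```

One lookup per tool, and the script behaves identically in your shell and under cron.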
13) When to use these commands — and when not to
- Use rm for files you truly want gone; don’t use it as a substitute for a trash can.
- Use cp for duplication, but if you want to preserve metadata, add -a.
- Use grep for quick scans; for complex parsing, write a short script instead.
- Use systemctl for long-running services; don’t use it to manage one-off tasks.
- Use apt for stable system packages; for app-specific dependencies, prefer project-level tooling like pip, npm, or cargo.
14) Field-tested micro-habits that save hours
These tiny habits are boring, but they cut down mistakes and speed up my work.
- I keep one terminal tab in my home directory as a “safe zone.”
- I use cd - constantly when switching contexts.
- I run ls -lh before backups so I can estimate archive sizes.
- I use tail -F on servers where logs rotate often.
- I avoid sudo in my shell history by default so I don’t get lazy.
15) Additional practical examples for each command group
Sometimes it helps to see a compact “recipe” per group. Here’s how I apply them when onboarding new teammates.
Movement example
pwd
ls -lh
cd projects
ls
cd -
This sequence confirms where you are, shows what’s around you, and teaches the “back jump” with cd -.
File management example
mkdir -p drafts/ubuntu
cd drafts/ubuntu
touch outline.md
cp outline.md outline.backup.md
mv outline.md outline-v1.md
It’s a simple workflow that mirrors how we work on docs in real life.
Reading and inspection example
head -n 5 data.csv
less data.csv
This is how I teach “preview first, then explore.”
Search example
grep -n "timeout" -R config/
find . -name "*.log"
This shows pattern matching and file targeting in two minutes.
Permissions example
chmod +x scripts/build.sh
sudo chown -R $USER:$USER ./local-cache
This is a common fix for “why can’t I run or edit this?”
System example
ps aux | grep node
sudo systemctl status nginx
It teaches the difference between processes and services.
16) Performance considerations (realistic ranges)
I avoid giving exact numbers because hardware varies, but here are the ranges I actually see in practice:
- ls in a huge directory: 50–500ms on SSD, longer on networked volumes.
- find across large trees: 200ms–2s depending on scope.
- du -sh on a deep project: 100–800ms or more with tiny files.
- grep -R on a repo: 100–1000ms depending on file count.
Knowing these ranges helps me decide whether to wait, narrow scope, or switch to a targeted command.
17) The “starter pack” command memory tricks I teach
If you’re learning from scratch, memorize them in clusters rather than lists:
- Navigation: pwd, ls, cd
- File actions: mkdir, touch, cp, mv, rm
- File reading: cat, less, head, tail
- Search: grep, find
- Permissions: chmod, chown, sudo
- Storage and archives: df, du, tar
- System management: ps, top, kill, systemctl, apt
That’s 25 commands exactly, grouped in a way your brain can actually remember.
18) Why these 25 commands still matter in 2026
I hear people say “the terminal is old,” but every new tool still depends on these basics. Containers, cloud VMs, CI pipelines, and automation scripts all boil down to the same primitives: move, list, read, search, copy, and manage processes. If you know these commands, you’re never locked out of a system.
Even in a world of GUIs and AI, the terminal remains the fastest, most predictable interface for infrastructure. That’s why I teach it first.
19) 2026 workflow tips I actually use
- I keep a local AI assistant hooked to my shell history. It helps me recall exact find or tar flags without guessing.
- I use small aliases like ll='ls -lh' and ..='cd ..', but I still teach the full commands so nothing is hidden.
- I record quick terminal demos in my team docs so new hires can replay them, not just read them.
- I keep a “safe commands” note in my shell config with my most used patterns.
- I test risky commands on a temp folder before I run them on production data.
20) Closing mindset: from commands to confidence
Think of the shell like a kitchen. pwd tells you which room you’re in, ls shows the ingredients, and cd moves you between stations. cp and mv are your prep steps, grep is your recipe search, and sudo is the locked pantry key you only grab when needed.
The more you practice these basics, the less you think about them. That’s when the terminal stops feeling like a black box and starts feeling like a workshop built for you.
You now have a complete starter set of 25 commands. My next step, if I were you, would be to pick three that feel awkward and use them daily for a week. For example, spend a week using less instead of cat for big files, or find instead of manually browsing. You’ll feel the difference fast. As you grow, add small guardrails: rm -i when you’re tired, sudo only when necessary, and pwd before destructive commands. These habits take minutes to build and save hours of recovery later. If you want to go further, start writing tiny scripts that chain these commands together, like find + grep to locate config problems, or tar + df to prepare clean backups. That’s how you move from knowing commands to using them like a pro.