I’ve hit the Colab “why is this breaking today?” moment more times than I can count. You open a notebook, run code that used to work, and suddenly a library throws a version error. The root cause is often simple: Colab updated the runtime and your project expects an older Python. When that happens, you don’t need a lecture—you need a reliable way to downgrade without wrecking the environment you’re standing on.
This post walks through what I actually do when I must run Python 3.7 or 3.8 in Colab. I’ll show you how to check the runtime, install another interpreter, wire it so your code uses it, and keep dependencies consistent. I’ll also call out the limits of Colab’s notebook kernel, because “downgrade” means different things depending on whether you need the notebook itself to run under an older version or you just need to execute scripts with that version. Think of it like bringing a spare engine to a racetrack: you can run the car on it, but swapping the entire chassis takes more planning.
When downgrading is the right move
I downgrade in Colab for three concrete reasons.
First, library compatibility. Some older packages—especially research code or niche ML toolkits—lag behind the newest Python release. If a library hard-pins to Python 3.8, you’re not going to reason your way around it. You’ll need that interpreter.
Second, legacy projects. I still see production codebases that are frozen at 3.7 or 3.8 because of dependencies that were vetted years ago. If you’re fixing a regression or writing a patch, it’s safer to run the environment the code expects than to “hope” the upgrade won’t alter behavior.
Third, parity with teammates or competition rules. If a Kaggle or private competition specifies Python 3.7, it’s not optional. You want local testing to match the target.
I avoid downgrading when the real issue is a single library that could be upgraded safely. In my experience, fixing dependency pins is often faster than changing the whole runtime. But if you’re blocked by strict version gates, the downgrade is the cleanest path.
What “downgrade” really means in Colab
Colab gives you a managed Jupyter kernel. The kernel is the Python interpreter that runs your notebook cells. It’s installed and managed by Google, and you can’t truly replace it the way you would on a local machine.
So when you “downgrade,” there are two possible goals:
1) Run an older interpreter side-by-side and execute code through it (most reliable).
2) Replace the system Python that the kernel points to (possible, but fragile).
I almost always choose #1. It’s like running a script with a different engine while keeping the dashboard and controls intact. You can still use notebook cells, but when you need the older Python, you launch it directly.
Here’s a quick comparison to help you pick the right approach.
Side-by-side interpreter (recommended)
- How: install a second interpreter via apt and call it by explicit path
- Risk: low; the kernel and Colab tooling stay untouched
- When: scripts, venvs, and batch runs (most cases)
- Restart: not required

Traditional (system swap)
- How: change /usr/bin/python3 or use update-alternatives
- Risk: fragile; may break Colab tools
- When: only if you truly need the kernel version changed
- Restart: often required
If you need notebook cells to run under the older Python, you can try a system swap, but I treat it as a last resort and expect breakage. For most cases, an isolated interpreter is enough.
Step 1: check the current runtime
Start by checking the Python version that Colab is using in the kernel. I run this in a notebook cell:
!python --version
I avoid hard-coding what “current” is because Colab updates regularly. The result tells you the baseline.
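If you prefer to check from Python itself, so you can branch on the result instead of eyeballing shell output, a small in-kernel snippet works. The 3.8 target here is just an example:

```python
import sys

def kernel_matches(major, minor):
    """True if the notebook kernel already runs the version we need."""
    return sys.version_info[:2] == (major, minor)

# The kernel's version string, and whether we even need a downgrade.
print(sys.version.split()[0])
print("already on 3.8:", kernel_matches(3, 8))
```

If `kernel_matches` returns True, you can skip the rest of this post for that session.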
If you need a specific version, write it down. I also recommend checking the OS release because Ubuntu package names can vary:
!lsb_release -a
That output helps when you install a different Python version via apt.
Step 2: install the older Python interpreter
For most Python 3.x downgrades, apt-get is the simplest option. Here’s how I install Python 3.8 in Colab. Adjust the version number to match your target:
!sudo apt-get update -y
!sudo apt-get install -y python3.8 python3.8-venv python3.8-dev
Why install the -venv and -dev packages? The -venv package provides the venv module, which I use to isolate dependencies; the -dev package supplies the headers that prevent build failures for packages with C extensions.
Verify the install:
!python3.8 --version
If that prints the expected version, you’re ready to build a clean environment around it.
Step 3: create an isolated environment for the downgrade
I recommend a dedicated virtual environment tied to the older interpreter. This is the safest path and keeps Colab’s system Python untouched.
!python3.8 -m venv /content/py38
! /content/py38/bin/python -m pip install --upgrade pip
Now, whenever you need Python 3.8, run it directly:
! /content/py38/bin/python --version
To install dependencies for that environment, use the venv’s Python:
! /content/py38/bin/python -m pip install numpy==1.24.4 pandas==2.0.3
I keep a tiny wrapper in the notebook so I don’t forget which interpreter I’m using. This cell uses subprocess to run code with the older Python:
import subprocess
from textwrap import dedent
code = dedent("""
import sys
import numpy as np
print(sys.version)
print("numpy", np.version)
""")
result = subprocess.run(
["/content/py38/bin/python", "-c", code],
capture_output=True,
text=True,
check=True,
)
print(result.stdout)
That keeps the notebook kernel stable while you execute real work in the older runtime.
If you must change the notebook’s Python
Sometimes you need notebook cells themselves to run under the older Python—for example, when a library must hook into the kernel. This is the risky path, but it’s still possible to try.
I use update-alternatives as a controlled switch. The /usr/bin/python3.10 path below is an example; substitute whatever default version your runtime actually reports. Note: this can break Colab’s preinstalled packages or even the notebook interface if system tools expect the default Python.
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
!sudo update-alternatives --config python3
Pick the version you want from the menu. Then restart the runtime from the Colab menu. After restart, check the version again:
!python --version
If the notebook stops working, revert to the default Python. I treat this as a short-lived experiment, not a long-term setup.
Keep dependencies consistent across sessions
Colab resets the filesystem when the runtime ends. If you want a repeatable downgrade, you need a small bootstrap cell.
I keep a single “setup” cell at the top of my notebooks:
!sudo apt-get update -y
!sudo apt-get install -y python3.8 python3.8-venv python3.8-dev
!python3.8 -m venv /content/py38
! /content/py38/bin/python -m pip install --upgrade pip
! /content/py38/bin/python -m pip install -r /content/requirements-py38.txt
Then I store /content/requirements-py38.txt in my Drive or recreate it from a pip freeze output. When I need reproducibility, I pin everything explicitly:
! /content/py38/bin/python -m pip freeze > /content/requirements-py38.txt
In 2026, I often pair this with uv for faster installs. You can install it with pip and then run it via the older interpreter:
! /content/py38/bin/python -m pip install uv
! /content/py38/bin/python -m uv pip install -r /content/requirements-py38.txt
That keeps install time down, especially for larger stacks.
Common mistakes I see (and how to avoid them)
Here are the pitfalls I keep running into, plus the fix I use each time:
- Forgetting that notebook cells still run under the default kernel. If you install Python 3.8 but call import in a cell, you’re still on the system version. Use /content/py38/bin/python for scripts or subprocess.
- Installing packages with the wrong pip. If you run !pip install, you’re installing for the kernel Python, not your downgraded interpreter. Always use python -m pip with the exact binary you want.
- Breaking the runtime with update-alternatives. If you swap system Python, keep a path to the original and be ready to reset the runtime.
- Skipping python3.x-venv during install. That leads to “No module named venv” when you create an environment.
- Assuming Colab will keep your changes. Everything in /content disappears when the runtime resets, so put your setup in a top cell or in a notebook template.
A quick mental model helps: treat the Colab kernel as a separate machine. You can run other Python versions next to it, but you can’t casually replace its engine without side effects.
When you should not downgrade
I don’t downgrade just because a package complains. If the package is actively maintained, upgrading it is usually safer and faster than changing the runtime. The exception is research code or archived libraries that are never going to support new Python.
I also avoid downgrading when the project depends on modern features like typing.Self, match statements, or new asyncio behavior. Backporting those features into older versions is often worse than keeping Python current.
If you need an older Python for full-kernel work, you might be better off outside Colab. A local virtual environment, a Docker container, or a hosted notebook service with kernel control will save time. Colab is fantastic for fast experiments, but it’s not a full system VM.
Performance and reproducibility notes
Downgrading doesn’t always affect speed, but it can change behavior. Older Python versions typically have slower startup and older standard library implementations. I’ve seen cold starts for external interpreter calls add 20–80ms per invocation, which is fine for batch tasks but noticeable for tight loops.
For repeatability, pin your dependencies. I prefer explicit version ranges in a requirements-py38.txt file. If you’re training models, record the Python version and package list in your results metadata. It’s the small habit that saves you hours later.
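A sketch of that metadata habit, run from the notebook kernel before or after training. The field names are my own convention, not a standard, and the package pins shown are placeholders:

```python
import json
import platform
import sys

def run_metadata(packages):
    """Bundle the environment facts I store next to training results.

    `packages` maps package name -> pinned version string; fill it from
    your requirements file or importlib.metadata.
    """
    return json.dumps({
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "packages": packages,
    }, indent=2, sort_keys=True)

# Example with placeholder pins matching this post's earlier installs.
print(run_metadata({"numpy": "1.24.4", "pandas": "2.0.3"}))
```

I write this JSON string into the same folder as the model outputs, so every result carries its environment with it.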
If you need GPU, be careful with CUDA-bound packages. Some versions of TensorFlow or PyTorch only support certain Python ranges. I check the library’s compatibility chart before pinning versions, then I align the Python downgrade to match. It’s like matching a plug to a socket—force it and you’ll get sparks.
Where I land on downgrading in Colab
When I need an older Python in Colab, I install a second interpreter and run it side-by-side. That approach keeps the notebook stable and still lets me execute real work in the required version. I treat system-level swaps as a short-term experiment, not a daily workflow. It’s the difference between adding a spare tool to your kit and rebuilding the whole toolbox.
If you take only one action, make it this: always call the interpreter you intend to use, and always install packages with that interpreter’s pip. It sounds obvious, but it’s the root cause of most “why is this still failing?” moments.
From here, I’d set up a repeatable bootstrap cell, pin dependencies, and keep a small test script that prints sys.version plus a few critical package versions. That gives you a quick health check after every runtime reset. If you hit a case where Colab’s kernel truly must be older, consider moving that project to a local environment or a managed notebook that lets you pick the kernel version explicitly. In my experience, that shift is often faster than fighting Colab’s defaults.
Downgrading isn’t glamorous, but it’s a practical skill. When you treat it as a controlled, isolated setup instead of a full system replacement, it stays reliable—and your future self will thank you.
Deeper example: a complete bootstrap cell I actually reuse
When I know I’ll be bouncing between sessions, I keep a reusable bootstrap cell that does four things: installs the old interpreter, builds a venv, installs pinned deps, and runs a quick smoke test. This turns “downgrade” into a single, repeatable action.
Here’s a fuller version that I drop into the top of a notebook. It’s longer, but it’s deterministic, and that’s what I want when Colab resets:
# 1) Install the old interpreter
!sudo apt-get update -y
!sudo apt-get install -y python3.8 python3.8-venv python3.8-dev
# 2) Create or recreate the venv
!rm -rf /content/py38
!python3.8 -m venv /content/py38
! /content/py38/bin/python -m pip install --upgrade pip
# 3) Install dependencies
! /content/py38/bin/python -m pip install -r /content/requirements-py38.txt
# 4) Quick smoke test
! /content/py38/bin/python - << 'PY'
import sys
import numpy
print('Python:', sys.version)
print('NumPy:', numpy.__version__)
PY
I like the in-cell python - << 'PY' style for two reasons: it’s fast, and it removes ambiguity about which interpreter I’m using. You can also swap the smoke test for a tiny domain-specific check—like importing torch and printing CUDA availability.
If you’re worried about reinstalling every time, you can add a simple “exists” guard around the venv path. But I usually go with brute-force re-creation, because it reduces the chance of a half-broken environment.
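If you do want the guard, here’s the shape I’d use. The path and base interpreter follow this post’s conventions; adjust both to your setup:

```python
from pathlib import Path
import subprocess

def ensure_venv(venv_dir, base_python="python3.8"):
    """Create the venv only when its interpreter is missing.

    Returns the venv's python path either way. `base_python` must already
    be installed (via apt, as earlier in this post).
    """
    py = Path(venv_dir) / "bin" / "python"
    if not py.exists():
        subprocess.run([base_python, "-m", "venv", str(venv_dir)], check=True)
    return str(py)
```

Then `PY38 = ensure_venv("/content/py38")` at the top of the notebook replaces the unconditional rm -rf step, at the cost of occasionally keeping a half-installed environment around.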
Deeper example: running notebook code under the old interpreter
When I’m using the side-by-side approach, I still need a way to run small pieces of code and get the results back into the notebook. I don’t want to keep jumping into shell cells, and I don’t want a maze of ad-hoc commands.
Here’s a helper function I keep in the notebook kernel that lets me execute code strings through the old interpreter and read the output:
import subprocess
from textwrap import dedent
PY38 = "/content/py38/bin/python"
def run_py38(code: str) -> str:
code = dedent(code).strip()
result = subprocess.run(
[PY38, "-c", code],
capture_output=True,
text=True,
check=True,
)
return result.stdout
print(run_py38("""
import sys
print(sys.version)
"""))
This is intentionally simple: it’s a wrapper around subprocess. When I need more advanced output, I write to JSON and parse it in the kernel. Here’s a quick pattern for that:
import json
payload = run_py38("""
import json
import sys
result = {
"version": sys.version,
"answer": 6 * 7,
}
print(json.dumps(result))
""")
data = json.loads(payload)
print(data["answer"], data["version"])
This is enough for 90% of the interactive work I do. If I need to run longer scripts or notebooks, I pivot to saved .py files or external runs.
Edge case: when apt can’t find the version you want
Sometimes the Ubuntu package repositories in Colab don’t have the exact Python version you want. This happens more with older releases like 3.7, and it happens more often when the base OS is updated.
In those cases, I switch to compiling Python from source. It’s heavier, but it gives you precise control. I only do this when I can’t get the version via apt, and I accept that it takes longer.
Here’s the basic workflow. I’m showing the structure, not the exact version number. Replace 3.7.17 with whatever you need:
# Build dependencies
!sudo apt-get update -y
!sudo apt-get install -y build-essential wget libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev libffi-dev
# Download and build
!wget -q https://www.python.org/ftp/python/3.7.17/Python-3.7.17.tgz
!tar -xzf Python-3.7.17.tgz
!cd Python-3.7.17 && ./configure --prefix=/opt/python3.7 --enable-optimizations
!cd Python-3.7.17 && make -j2
!cd Python-3.7.17 && sudo make install
# Verify
!/opt/python3.7/bin/python3.7 --version
Then I build a venv from the compiled interpreter:
!/opt/python3.7/bin/python3.7 -m venv /content/py37
!/content/py37/bin/python -m pip install --upgrade pip
This route is slower, and you should expect 5–15 minutes of build time depending on the runtime. It’s not my first choice, but it’s a reliable fallback when apt doesn’t have what I need.
Edge case: mixing CUDA and old Python
GPU work gets tricky when you downgrade Python. Many GPU libraries have strict version matrices: Python version, CUDA version, and library version must all align.
I handle this in a two-step process:
1) Identify the library version that supports my desired Python.
2) Align that library with the CUDA runtime Colab provides.
I keep a simple checklist in my notebook:
- What Python versions does the library support?
- Which CUDA version does Colab provide on this runtime?
- Does the wheel for that library match both?
If those don’t align, I either pick a different library version or a different approach (like CPU-only runs). I’ve learned not to fight this too hard—time is better spent adjusting versions than brute-forcing incompatible wheels.
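The checklist above is easy to encode so the decision is mechanical. The matrix values below are illustrative placeholders, not real compatibility data; fill them in from the library’s published support chart:

```python
# Illustrative support matrix: library version -> supported Python and
# CUDA versions. These entries are examples, not verified facts.
MATRIX = {
    "1.10": {"py": ["3.6", "3.7", "3.8", "3.9"], "cuda": ["10.2", "11.1"]},
    "2.1":  {"py": ["3.8", "3.9", "3.10", "3.11"], "cuda": ["11.8", "12.1"]},
}

def compatible_versions(python, cuda, matrix=MATRIX):
    """Return library versions whose wheels match both constraints."""
    return [
        lib for lib, spec in matrix.items()
        if python in spec["py"] and cuda in spec["cuda"]
    ]

print(compatible_versions("3.7", "11.1"))
```

If the list comes back empty, that’s my cue to change approach (different library version, or CPU-only) rather than force an incompatible wheel.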
Edge case: notebook widgets and extensions
Some notebook widgets or interactive libraries depend on the kernel environment. When you run code through a side-by-side interpreter, you lose direct access to kernel extensions that expect the notebook’s Python version.
I’ve hit this with interactive visualization tools. The workaround is either:
- Use the kernel Python for visualization and the old Python for computation, or
- Export results from the old interpreter and visualize them in the kernel.
This sounds annoying, but it’s usually manageable. For example, I run a model under Python 3.8, write the results to a temp file, then load and chart them in the notebook kernel.
Practical scenario: reproducing a legacy training run
Here’s a scenario I encounter in ML projects: someone trained a model in Python 3.7 with a specific stack (say, PyTorch 1.10 and a few custom libs), and now you need to reproduce the metrics.
My steps:
1) Create a pinned requirements file from the legacy setup.
2) Install Python 3.7 and create a venv.
3) Install the pinned requirements via the 3.7 interpreter.
4) Run the training as a script with the legacy Python.
5) Import the outputs into the notebook for visualization.
This might sound like overkill, but it keeps the notebook kernel usable and isolates the legacy environment. In practice, it’s less fragile than trying to flip the kernel itself.
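Steps 4 and 5 look like this in miniature. I use sys.executable so the sketch runs anywhere; in Colab you’d substitute the /content/py37 interpreter, and the accuracy value here is a dummy stand-in for real metrics:

```python
import json
import subprocess
import sys
import tempfile
from pathlib import Path

# Legacy-side script: pretends to train, then writes metrics to a file.
script = """
import json, sys
json.dump({"python": sys.version.split()[0], "accuracy": 0.91},
          open(sys.argv[1], "w"))
"""

out = Path(tempfile.mkdtemp()) / "metrics.json"
# In Colab, replace sys.executable with /content/py37/bin/python.
subprocess.run([sys.executable, "-c", script, str(out)], check=True)

# Kernel-side: load the results for comparison and plotting.
metrics = json.loads(out.read_text())
print(metrics)
```

The file boundary is the point: the legacy environment only ever produces artifacts, and the notebook kernel only ever consumes them.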
Practical scenario: matching a competition environment
Competitions and coding challenges often lock you to a specific Python version. If the challenge says “Python 3.7,” I treat that as a hard constraint.
In that case I do two extra things:
- I capture a pip freeze after installing all dependencies, so I can recreate the exact stack later.
- I write a tiny validation script that prints Python and the top three libraries I rely on.
That “validation script” is my sanity check. If the output matches the expected versions, I know I’m not drifting away from the target.
Practical scenario: using an old library with a new kernel
Sometimes the whole reason you’re downgrading is one library that just won’t import. I’ve found this frequently with research repos that aren’t actively maintained.
When that’s the case, I don’t always need to run everything under the old Python. I isolate the part of the workflow that depends on the old library and keep the rest on the modern kernel.
Example approach:
- Run the old library on the legacy interpreter to produce a feature file or model checkpoint.
- Load that output into the notebook kernel for analysis, visualization, or further processing.
This keeps the “fragile” part small and limits the blast radius of the downgrade.
Alternative approach: pyenv inside Colab
Some people like using pyenv to manage Python versions. I’ve tested this in Colab, and it works, but I don’t use it often because it adds extra setup and can be brittle if the runtime changes.
If you’re already comfortable with pyenv, it can be useful when you want multiple versions and want a uniform interface. But in Colab, I usually prefer the straightforward apt approach because it’s faster and more predictable.
If you do try pyenv, keep the same principles:
- Don’t assume the kernel changes.
- Use explicit paths to the interpreter.
- Install packages with the interpreter you intend to use.
Alternative approach: running a script file via the old interpreter
Instead of pasting code into subprocess, you can write .py files and execute them with the old interpreter. This is my go-to when I have larger scripts.
Example:
code = """
import sys
import numpy as np
print(sys.version)
print(np.mean([1,2,3]))
"""
with open("/content/legacy_run.py", "w") as f:
f.write(code)
Then run it with the old interpreter:
! /content/py38/bin/python /content/legacy_run.py
This is especially helpful when you’re running a full training script or a library that needs a file entry point.
Alternative approach: Docker (only if you really need it)
Colab doesn’t let you run Docker in the way you would on a local machine, but sometimes I get asked about it. If you truly need full control of the runtime, you might be better off outside Colab entirely.
For me, Docker is a local or VM solution, not a Colab solution. It gives you true control, but it’s not the right tool inside Colab’s managed environment. I mention it here only because it’s a common question.
Performance considerations: what actually changes
People assume older Python means slower code. That’s sometimes true, but the more noticeable performance change I see in Colab is the overhead of launching an external interpreter.
Here’s what I’ve observed in practice:
- The first call to the old interpreter can add tens of milliseconds due to startup.
- Repeated calls have a smaller overhead but still cost more than in-kernel execution.
- If you run tiny functions inside subprocess in a tight loop, performance suffers.
My workaround is simple: batch work into fewer interpreter calls. If I’m running a model or a transform, I run it in one script call, not in dozens of small subprocess calls. I treat the older interpreter as a batch runner, not a microservice.
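You can see the overhead directly. This sketch times five separate interpreter calls against one batched call doing the same work; I use sys.executable so it runs anywhere, but in Colab the legacy path applies:

```python
import subprocess
import sys
import time

def timed(fn):
    """Wall-clock seconds for a zero-argument callable."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

# Five tiny calls: five interpreter startups.
many = timed(lambda: [
    subprocess.run([sys.executable, "-c", "print(0)"],
                   capture_output=True, check=True)
    for _ in range(5)
])

# The same work batched into a single call: one startup.
one = timed(lambda: subprocess.run(
    [sys.executable, "-c", "for _ in range(5): print(0)"],
    capture_output=True, check=True))

print(f"five calls: {many:.3f}s  one call: {one:.3f}s")
```

The gap grows with call count, which is why I push work into one script invocation whenever I can.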
Reproducibility checklist I actually use
When I’m serious about reproducibility, I use this checklist at the top of a Colab notebook:
- Print the kernel Python version.
- Print the legacy interpreter version.
- Print the OS release.
- Print the major library versions.
- Record a pip freeze for the legacy environment.
Here’s a tiny helper cell that prints both Pythons and key libs:
import subprocess
print("Kernel:")
!python --version
print("Legacy:")
! /content/py38/bin/python --version
print("Legacy libs:")
! /content/py38/bin/python - << 'PY'
import numpy, pandas, sys
print('Python', sys.version)
print('NumPy', numpy.__version__)
print('Pandas', pandas.__version__)
PY
This looks simple, but it saves me from subtle mismatches, especially when I return to a notebook weeks later.
Troubleshooting: my go-to fixes
Here are the fixes I reach for most when the downgrade isn’t working as expected:
- If python3.8 -m venv fails, I check whether python3.8-venv is installed.
- If a package build fails, I install python3.8-dev and system build tools.
- If pip installs land in the wrong environment, I use explicit interpreter paths.
- If apt-get can’t find the version, I switch to source compilation.
- If I break the kernel with update-alternatives, I restart the runtime and revert.
I keep these in mind because most errors are mundane and fixable with the same small set of actions.
Safer patterns for complex workflows
If I’m doing anything complex—training, data processing, or multi-step pipelines—I lean on a few patterns:
- Use scripts instead of ad-hoc cell commands for the legacy interpreter.
- Use the notebook kernel for orchestration and visualization.
- Keep requirements-py38.txt in Drive or a version-controlled folder.
- Write a run_legacy.py that takes CLI args, so I can call it predictably.
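A minimal shape for that entry point. The argument names are my own convention, not from any particular project:

```python
import argparse
import json

def main(argv=None):
    """CLI entry point the notebook calls via the legacy interpreter."""
    parser = argparse.ArgumentParser(description="legacy run")
    parser.add_argument("--input", required=True, help="path to input data")
    parser.add_argument("--out", required=True, help="where to write results")
    args = parser.parse_args(argv)
    # Real work would go here; this sketch records the config it received.
    with open(args.out, "w") as f:
        json.dump({"input": args.input, "status": "ok"}, f)

if __name__ == "__main__":
    main()
```

Saved as /content/run_legacy.py, it’s invoked predictably from any cell: ! /content/py38/bin/python /content/run_legacy.py --input data.csv --out metrics.json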
This turns a messy downgrade into a clean separation of responsibilities: the notebook drives the workflow, and the old interpreter does the specialized work.
Why I still prefer the side-by-side approach
I’ll say it again because it matters: the side-by-side approach is boring, and boring is good. It gives you reliability. The kernel stays stable, your tooling still works, and you can bring in the legacy interpreter only when you need it.
When I try to replace the kernel Python, I end up spending time unbreaking things. That might be acceptable for a short experiment, but it’s not a stable daily workflow.
Final thoughts: treat downgrading like a controlled experiment
Downgrading Python in Colab is never as clean as doing it locally, because the kernel is managed. But it can still be reliable if you treat it like an experiment with controlled variables.
My rules are simple:
- Keep the kernel intact whenever possible.
- Use an explicit interpreter for legacy runs.
- Pin your dependencies and save them.
- Keep a smoke test cell to validate your setup after every reset.
If you approach it this way, the downgrade becomes a repeatable process, not a fragile hack. And that’s the real goal: keep your work moving, keep your environment predictable, and spend your time on the project instead of fighting the runtime.
If I had to boil this down to one sentence: install the old Python, isolate it, and call it explicitly. Everything else flows from that.


