Get a semantic layer and BI on top of your Supabase data in ~15 minutes. No dbt. No data engineering background needed. Just your Supabase tables, a bit of YAML, and the Lightdash CLI.
You'll end up with charts, dashboards, and an AI agent that can answer questions about your data in plain English.
## What you need

- A Mac with Homebrew
- A Supabase project with some data in it (or about to have data — if you're building an app that writes to Supabase, that counts)
## Step 1: Create a Supabase project

If you don't have one yet:
- Go to supabase.com/dashboard and sign in (or create an account)
- Click New Project
- Pick an org, give it a name, and save your database password — you'll need it later for Lightdash
- Enable the Data API if prompted
- Wait for it to provision (~1–2 min)
## Step 2: Get some data into it

However your app works — Supabase API, a script, CSV import via Table Editor, raw SQL — just make sure you have at least one table with data before moving on.
## Step 3: Sign up for Lightdash Cloud

- Go to app.lightdash.cloud/register
- Sign up and verify your email
- You'll land on a project setup wizard — stop here, don't click anything yet!
Leave this tab open. We'll come back to it in step 6.
## Step 4: Install the Lightdash CLI

```
brew tap lightdash/lightdash
brew install lightdash
```

Log in:

```
lightdash login https://app.lightdash.cloud
```

This pops open your browser to authorize the CLI.
Install AI copilot skills (this loads Lightdash reference docs into Cursor, Claude Code, etc.):

```
lightdash install-skills
```

This is worth doing — it means your AI copilot already knows the Lightdash YAML format and can generate models for you.
## Step 5: Describe your data in Lightdash YAML

This is where you describe your data to Lightdash — what the columns mean, what metrics to compute, how to label things for humans. Full docs here: Lightdash YAML guide

Make a directory for your models:

```
mkdir -p lightdash/models
```

Create `lightdash.config.yml` in your project root:

```yaml
warehouse:
  type: postgres
```

First, grab your table structure from Supabase:
- Go to SQL Editor in the Supabase dashboard
- Run:

```sql
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```

- Copy the output
If you ran lightdash install-skills in step 4, your AI copilot already knows the Lightdash YAML format. Paste the table structure and ask for a model.
Example prompt:
```
Here's my Supabase table structure:

 table_name  | column_name | data_type
-------------+-------------+---------------------------
 my_readings | id          | bigint
 my_readings | recorded_at | timestamp with time zone
 my_readings | sensor_name | text
 my_readings | value       | double precision

Create a Lightdash YAML model file at lightdash/models/my_readings.yml
for this table. Include useful metrics and set up time intervals on any
timestamp columns. Run lightdash lint when done.
```
Your copilot will generate a valid model with dimensions, metrics, labels, and descriptions — and validate it.
Create a YAML file at `lightdash/models/your_table.yml`:

```yaml
type: model
name: my_readings
label: My Readings
description: Sensor readings over time
sql_from: public.my_readings
metrics:
  total_readings:
    type: count
    sql: ${TABLE}.id
    description: Total number of readings
  avg_value:
    type: average
    sql: ${TABLE}.value
    label: Average Value
    round: 1
dimensions:
  - name: id
    type: number
    sql: ${TABLE}.id
    hidden: true
  - name: recorded_at
    type: timestamp
    sql: ${TABLE}.recorded_at
    label: Recorded At
    time_intervals:
      - RAW
      - HOUR
      - DAY
      - WEEK
      - MONTH
  - name: sensor_name
    type: string
    sql: ${TABLE}.sensor_name
    label: Sensor Name
  - name: value
    type: number
    sql: ${TABLE}.value
    label: Value
    round: 1
```

Key things:
- `sql_from` — your Supabase table: `public.your_table`
- `dimensions` — one per column. Types: `number`, `string`, `timestamp`, `date`, `boolean`
- `metrics` — aggregations: `count`, `average`, `sum`, `min`, `max`. Every metric needs an explicit `sql` field (even `count`)
- `time_intervals` — add on timestamp/date columns for hour/day/week/month grouping
- `label` and `description` — these power Lightdash's AI agent, so make them descriptive
- Full schema spec: model-as-code-1.0.json
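Extending the model later is just a few more lines. As a sketch, here is one extra metric under `metrics:`, using the `value` column from the example model and the `sum` type from the list above (the metric name and description are illustrative):

```yaml
metrics:
  total_value:
    type: sum             # one of: count, average, sum, min, max
    sql: ${TABLE}.value   # explicit sql is required for every metric, even count
    label: Total Value
    description: Sum of all reading values across the selected period
```

After any edit like this, lint and re-deploy as described below.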
Validate your YAML:

```
lightdash lint
```

You should see:

```
✓ All Lightdash Code files are valid!
```
If you see errors, fix them before deploying. Lint is fast — run it every time you edit YAML.
After any model change (adding columns, metrics, renaming things), you need to re-deploy:
```
lightdash deploy --no-warehouse-credentials
```

This pushes your updated model to Lightdash. Charts and dashboards reference model fields — if the model is stale, queries will fail. See step 7 for full deploy details.
## Step 6: Connect Lightdash to Supabase

Go back to the Lightdash tab you left open in step 3 (the setup wizard).
- Click "Create manually"
- Click "I've already defined them"
This takes you to the connection form.
- In the Supabase dashboard, click Connect at the top of your project
- Select the Shared Pooler tab
- Click "View parameters"
- Host — the pooler host (e.g. `aws-0-us-east-1.pooler.supabase.com`)
- Database — `postgres`
- User — `postgres.xxxx` (the full string with the project ref after the dot)
- Password — the database password you saved in step 1
Click Advanced — this section looks optional but it is not:
- Port — match what Supabase shows (usually `6543` for transaction mode, `5432` for session mode)
- SSL mode — set to `no-verify`. If you skip this you'll get "self-signed certificate in certificate chain" and nothing will work
For the dbt project section: select "CLI" and ignore everything else. This section is irrelevant for Lightdash YAML users, but you have to pick something.
Hit Save & Test.
Why the shared pooler? Lightdash Cloud connects from their servers, not your laptop. The direct Supabase host (`db.xxxx.supabase.co`) may not resolve from Lightdash's infrastructure. The shared pooler always works.
## Step 7: Deploy your models

Now push your Lightdash YAML models to the project:

```
lightdash deploy --no-warehouse-credentials
```

If this is your first time and the wizard didn't create a project yet, use:

```
lightdash deploy --create --no-warehouse-credentials
```

It'll ask for a project name — hit Enter for the default.

Subsequent deploys (after editing YAML) are just:

```
lightdash deploy --no-warehouse-credentials
```

## Step 8: Charts and dashboards as code

Lightdash supports charts and dashboards as YAML files too — same workflow as models. You can build them in the UI and download them, or generate them with your AI copilot.
Your copilot already has the Lightdash chart and dashboard schemas loaded (from lightdash install-skills). Just tell it what you want:
```
Create a line chart showing average CO2 over time at minute resolution,
using my airlab_readings model. Put it in lightdash/charts/co2-over-time.yml.
```
Or for a dashboard:
```
Create a dashboard at lightdash/dashboards/overview.yml with KPI tiles for
reading count, avg CO2, avg temperature, avg humidity across the top, then
time series charts below for CO2, temperature, humidity, pressure, and VOC.
```
Charts go in lightdash/charts/ and dashboards go in lightdash/dashboards/.
Key fields:

- `tableName` and `metricQuery.exploreName` — both should be your model name (e.g. `airlab_readings`)
- Field IDs follow the pattern `modelname_fieldname` for dimensions/metrics, and `modelname_fieldname_interval` for time intervals (e.g. `airlab_readings_timestamp_minute`)
- `chartConfig.type` — `cartesian` (line/bar/area), `big_number` (KPI), `pie`, `table`, `gauge`, `funnel`, etc.
- `tableConfig.columnOrder` — required, list the field IDs used in the chart
- `spaceSlug` — charts and dashboards in the same space slug are grouped together
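Putting those fields together, a minimal chart file might look like the sketch below. This is illustrative only — it uses the field names listed above and a hypothetical `airlab_readings` model with an `avg_co2` metric; check the chart schema spec (or download a UI-built chart) for the exact shape:

```yaml
# lightdash/charts/co2-over-time.yml — sketch, not the full schema
name: CO2 over time
tableName: airlab_readings              # your model name
metricQuery:
  exploreName: airlab_readings          # same as tableName
  dimensions:
    - airlab_readings_timestamp_minute  # modelname_fieldname_interval pattern
  metrics:
    - airlab_readings_avg_co2           # modelname_fieldname pattern
chartConfig:
  type: cartesian                       # line/bar/area
tableConfig:
  columnOrder:                          # required: every field ID used above
    - airlab_readings_timestamp_minute
    - airlab_readings_avg_co2
spaceSlug: overview                     # charts/dashboards sharing a slug are grouped
```

Building one chart in the UI and downloading it is the fastest way to confirm the real schema before writing more by hand.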
This is a separate command from lightdash deploy:
```
# First time — use --force since files are new
lightdash upload --force --include-charts

# Subsequent uploads
lightdash upload --include-charts
```

Important: `lightdash deploy` pushes model changes (dimensions, metrics, explores). `lightdash upload` pushes charts and dashboards. They're separate commands — you'll often need to run both after making changes.
Your project should now look like this:

```
your-project/
├── lightdash.config.yml
├── lightdash/
│   ├── models/
│   │   └── your_table.yml           # Model (dimensions, metrics)
│   ├── charts/
│   │   ├── kpi-total-readings.yml   # Big number KPI
│   │   ├── co2-over-time.yml        # Line chart
│   │   └── ...
│   └── dashboards/
│       └── overview.yml             # Dashboard referencing charts
└── README.md
```
## Step 9: Explore

Go to your Lightdash project in the browser. You should see your model in the sidebar and your dashboard in the space you created — click in and start exploring:
- Pick dimensions and metrics, hit Run query
- Open your dashboard to see all charts at once
- Try the AI agent — ask it questions about your data in plain English
That's it. Full semantic layer, charts, dashboards, and AI — running on Supabase, no dbt.
## Tips

- Let AI write your YAML — after `lightdash install-skills`, just paste your table structure and ask. It's fast.
- Run `lightdash lint` constantly — it's instant and catches most errors before deploy
- Good descriptions = good AI — `label` and `description` fields directly improve what Lightdash's AI agent can do
- Every metric needs `sql` — even `count` types need `sql: ${TABLE}.column`
- Supabase SQL Editor is great for quick data checks and schema grabs
- `deploy` vs `upload` — `lightdash deploy` for models, `lightdash upload --include-charts` for charts/dashboards. You need both.
## Rough edges

We set this up end-to-end for a hackathon and hit some sharp edges. These are notes for ourselves and anyone else who runs into the same things.
The setup wizard is built for dbt users. If you're using Lightdash YAML, you have to guess your way through:
- "Create manually" — not obvious this is the right path
- "I've already defined them" — the metrics prompt wording is dbt-centric
- The dbt project section is irrelevant — you have to pick "CLI" and ignore the rest, which feels wrong
- No Supabase preset — just "Postgres". A Supabase option could auto-fill the pooler host, suggest `no-verify` SSL, and skip the dbt section
- Advanced settings look optional but are critical — SSL mode and port live here, and you can't connect to Supabase without changing them
A dedicated "I'm using Lightdash YAML" or "I don't use dbt" path would fix most of this.
Hitting Save & Test reports success but doesn't verify the connection actually works. You don't find out it's broken until you try SQL Runner or run a query. There's no way to:
- Run a test query against the warehouse
- See connection error logs
- Distinguish "saved config" from "can actually reach the database"
A real connection test (even just SELECT 1) with raw error output would save a lot of debugging.
The default SSL mode causes "self-signed certificate in certificate chain" with Supabase. You have to set it to no-verify — even require doesn't work. Suggestions:
- Default to `no-verify` (works with most hosted Postgres)
- Show a better error message suggesting the fix
- If the host looks like `*.pooler.supabase.com`, auto-suggest the right SSL config
Supabase shows several connection options (direct, shared pooler, session pooler, transaction pooler). The direct host doesn't resolve from Lightdash Cloud. There's no hint about this in the Lightdash connection form. A note like "If using Supabase, use the shared pooler host" would save people a confusing ENOTFOUND error.
A `count` metric without an explicit `sql` field passes `lightdash lint` but fails during `lightdash deploy`:

```
ERROR> my_model : Metric "my_count" in table "my_model" is missing a sql definition
```
Lint should catch everything deploy would reject.
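Concretely, the gap is one line — a sketch using the metric name from the error message above:

```yaml
# Passes lint but fails deploy — no sql on the count metric:
# metrics:
#   my_count:
#     type: count

# Deploys cleanly — explicit sql present, even for count:
metrics:
  my_count:
    type: count
    sql: ${TABLE}.id
```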
The first deploy prompts interactively for a project name. Can't be piped or passed as a flag, which is annoying for scripts and CI.
```
Warning: CLI (0.2467.0) is running a different version than Lightdash (0.2459.3)
```
brew install pulls the latest release, which is often ahead of cloud. The warning says "consider upgrading" when you're already newer. Should only warn when the CLI is behind.
## Open questions

- Does the setup wizard's "Save & Test" actually create the project? If so, do you need `lightdash deploy --create` at all, or just `lightdash deploy`? If both create projects, do you end up with duplicates? The interaction between the wizard flow and the CLI `--create` flag is unclear.
- What does "Save & Test" actually test? It seems to save the config and maybe check the connection at the TCP level, but it doesn't surface errors like wrong SSL mode, bad credentials, or unreachable hosts in any useful way. What's it doing under the hood?
- Should `require` SSL mode work with Supabase? In standard Postgres semantics, `require` means "encrypt but don't verify certs" — so it should work with Supabase's pooler. The fact that only `no-verify` works suggests Lightdash's `require` mode might be doing cert verification (which would be `verify-ca` behavior). Is this intentional?