Astra Fellowship

Astra is Constellation’s flagship fellowship, built to accelerate AI safety research and talent.


As AI advances at unprecedented speed, preparing for its risks is critical. Astra brings exceptional people into the field and connects them with leading mentors and research opportunities.

Complete this expression of interest form to be notified when our next cohort launches!


Astra is a fully funded, 3-6 month, in-person program at Constellation’s Berkeley research center.

Fellows advance frontier AI safety projects with guidance from expert mentors and dedicated research management and career support from Constellation’s team.

 

Over 80% of Astra’s first cohort are now working full-time in AI safety roles at organizations such as Redwood Research, METR, Anthropic, OpenAI, Google DeepMind, the Center for AI Standards and Innovation, and the UK AI Security Institute. This round, we want to go further: placing even more people into the highest-impact roles and helping fellows launch new initiatives to tackle urgent but neglected problems.

Trusted by the best

What we're looking for


We’re looking for talented people who are excited to pursue new ideas and projects that advance safe AI. You may be a strong fit if you:
  • Are motivated to reduce catastrophic risks from advanced AI

  • Bring technical or domain-specific experience relevant to the focus areas (e.g., technical research, security, governance, policy, strategy, field-building)

  • Would like to transition into a full-time AI safety role or start your own AI safety focused organization

Prior AI safety experience is not required. Many of our most impactful fellows entered from adjacent fields and quickly made significant contributions. If you're interested but not sure you meet every qualification, we’d still encourage you to apply.

Eli Lifland & Romeo Dean, AI Futures Project

“Astra was a really important program for us. We first started working with Daniel Kokotajlo through Astra, and the scenario we started developing during the program eventually became AI 2027. After Astra, we also co-founded the AI Futures Project together.”

Michael Chen, METR

“I worked with METR during Astra, and joined METR’s policy team immediately after the fellowship. Participating in Astra directly led to my current role.”

Aryan Bhatt, Redwood Research

“Astra was an incredibly important opportunity. I was able to work closely with my mentor Buck, who taught me a lot about doing good research. That eventually led me to my role at Redwood Research, where I now run an entire team.”

Martin Soto, Member of Technical Staff, UK AISI

“I’m really glad I participated in Astra! Endless conversations (both with my mentor and everyone else at Constellation), ranging from theoretical alignment to frontier governance, were an invaluable source of learning, knowledge, and opportunities.”

Mentors

Empirical Research
  • Joel Becker (METR)
  • Joe Benton (Anthropic)
  • Jan Betley (Truthful AI)
  • Aryan Bhatt (Redwood Research)
  • Sam Bowman (Anthropic)
  • Trenton Bricken (Anthropic)
  • Collin Burns (Anthropic)
  • James Chua (Truthful AI)
  • Asa Cooper-Stickland (UK AISI)
  • Scott Emmons (Anthropic)
  • Owain Evans (Truthful AI)
  • Kyle Fish (Anthropic)
  • Ryan Greenblatt (Redwood Research)
  • Charlie Griffin (UK AISI)
  • Marius Hobbhahn (Apollo Research)
  • Erik Jenner (Google DeepMind)
  • Erik Jones (Anthropic)
  • Megan Kinniment (METR)
  • Jan Leike (Anthropic)
  • David Lindner (Google DeepMind)
  • Jack Lindsey (Anthropic)
  • Sam Marks (Anthropic)
  • Stephen McAleer (OpenAI)
  • Neev Parikh (METR)
  • Ethan Perez (Anthropic)
  • Sara Price (Anthropic)
  • Alec Radford (Formerly OpenAI)
  • Fabien Roger (Anthropic)
  • Buck Shlegeris (Redwood Research)
  • Jascha Sohl-Dickstein (Anthropic)
  • Alex Tamkin (Anthropic)
  • Mia Taylor (Forethought)
  • Sydney Von Arx (METR)
  • Miles Wang (OpenAI)

Field Building
  • Alexandra Bates (Constellation Institute)
  • Arden Koehler (80,000 Hours)

Governance & Policy
  • Isabella (Fengyu) Duan (Safe AI Forum)
  • Ben Chang (Formerly OSTP, CSET)
  • Fynn Heide (Safe AI Forum)
  • Saad Siddiqui (Safe AI Forum)

Security
  • Nicholas Carlini (Anthropic)
  • Ben Chang (Formerly OSTP, CSET)
  • Buck Shlegeris (Redwood Research)
  • Keri Warr (Anthropic)

Strategy
  • Hazel Browne (Coefficient Giving)
  • Tom Davidson (Forethought)
  • Raymond Douglas (Telic Research)
  • Lukas Finnveden (Redwood Research)
  • Ryan Greenblatt (Redwood Research)
  • Rose Hadshar (Forethought)
  • Fynn Heide (Safe AI Forum)
  • Daniel Kokotajlo (AI Futures Project)
  • Thomas Larsen (AI Futures Project)
  • Eli Lifland (AI Futures Project)
  • Will MacAskill (Forethought)
  • Jake Mendel (Coefficient Giving)
  • Fin Moorhouse (Forethought)
  • Max Nadeau (Coefficient Giving)
  • Julian Stastny (Redwood Research)
  • Mia Taylor (Forethought)

Application
  • Applications open: Aug 28
  • Applications close: Sep 26

Decisions & Onboarding
  • Acceptances sent: Nov 6
  • Onboarding finishes: Dec 31

Program
  • Program starts: Jan 5
  • Program ends: Mar 31
  • Extension starts: Mar 31
  • Extension ends: Jun 30
Fellowship Benefits

We provide the resources and support needed for fellows to pursue full-time research.


Stipends

Competitive financial support for the duration of the program.


Research Budget

~$15K per fellow per month for compute.


Visa Support

We provide support and guidance for international applicants navigating the visa process.

Additional benefits

1. Workspace & Community
Ongoing collaboration at Constellation’s Berkeley research center, where you’ll have access to ~150 network participants and ongoing AI-safety-focused convenings (e.g., shared daily meals, seminars, workshops, tabletop exercises, conferences).

2. Mentorship & Research Management
Weekly mentorship from senior experts and research management support from Constellation’s team (via 1:1s, small-group meetings, office hours, and Slack collaboration).

3. Placement Services
Many fellows go on to join organizations participating in Astra; others are connected to opportunities across our network.

4. Incubation Services
We also provide advisory services for fellows launching new projects and organizations (e.g., business operations, communications, hiring, fundraising, and more).


How to apply

Applications for our January 2026 cohort are now closed.

Complete this expression of interest form to be notified when our next cohort launches—possibly as soon as Summer 2026!

Harry Mayne, Spring 2026 Cohort
“Working with Owain Evans gave me the opportunity to learn how to conduct great empirical research and communicate it effectively. The mentors, organization, and cohort are constantly challenging your way of thinking and helping you develop into the best version of yourself as a researcher.”
Yong Zheng-Xin, Spring 2026 Cohort
“I love being part of a diverse community of smart and driven people, who all care deeply about AI safety. Being an Astra Fellow has given me direct access to experienced researchers, which has been incredibly meaningful for my growth.”

Join the next cohort

Complete this expression of interest form to be notified when our next cohort launches!


Explore other programs and discover new ways to engage with our network