Building the future of AI together

  • 25k

    GitHub Stars

  • 25k

    Collaborators

Recent MAX projects

All community MAX projects

Backend for torch.compile

Enable PyTorch models to run seamlessly on the MAX backend using torch.compile. Boost performance, simplify deployment, and unlock true cross-hardware flexibility.
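In torch.compile, a backend is simply a callable that receives the captured FX graph and returns something callable in its place; that hook is the integration surface a MAX backend plugs into. The sketch below uses PyTorch's public custom-backend API with a hypothetical pass-through backend (`passthrough_backend` is illustrative only; the real MAX backend would lower the captured graph to MAX rather than run it unchanged):

```python
import torch

# Hypothetical pass-through backend, for illustration only.
# torch.compile calls this with the captured FX graph and example inputs;
# a real MAX backend would compile `gm` for MAX hardware here.
def passthrough_backend(gm: torch.fx.GraphModule, example_inputs):
    return gm.forward  # return a callable to run in place of the graph

@torch.compile(backend=passthrough_backend)
def f(x):
    return torch.sin(x) + 1.0

y = f(torch.zeros(3))  # first call triggers graph capture and backend dispatch
```

The key property is that user code keeps calling `f` as before: swapping in a different backend changes where the graph runs, not how the model is written.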

Gabriel de Marmiesse

@Kyutai

Custom Models with MAX

Qwerky is deploying their custom Mamba models across NVIDIA and AMD. Qwerky's engineers can write a selective scan operation for their Mamba model once in Mojo and have it automatically optimized for NVIDIA tensor cores and AMD matrix engines without any additional code changes.

Evan Owen

@Qwerky

Nabla

Nabla is a machine learning library featuring PyTorch-like gradient computation (imperatively via .backward()), JAX-like composable function transformations (grad, vmap, jit, etc.), and custom differentiable CPU/GPU kernels.

Tilli Fe

Champion 🏆

Recent Mojo projects

All community Mojo projects

NuMojo

NuMojo is a library for numerical computing in Mojo 🔥. It aims to encompass the extensive numerics capabilities found in Python packages such as NumPy, SciPy, and scikit-learn.

Mojo Numerics & Algorithms Group

7 Contributors

Kelvin

A powerful dimensional analysis library written in Mojo for all your scientific computing needs.
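Kelvin's actual API is in Mojo; as a language-neutral illustration of what a dimensional analysis library enforces, here is a minimal Python sketch (all names hypothetical, not Kelvin's API) that tracks unit exponents through arithmetic and rejects adding incompatible quantities:

```python
from dataclasses import dataclass

# Minimal dimensional-analysis sketch (hypothetical, not Kelvin's API).
# `dims` holds unit exponents for (length, mass, time).
@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError("cannot add quantities with different dimensions")
        return Quantity(self.value + other.value, self.dims)

meter = Quantity(1.0, (1, 0, 0))
second = Quantity(1.0, (0, 0, 1))
# Dividing length by time yields a speed with dims (1, 0, -1):
speed = Quantity(100.0, (1, 0, 0)) / Quantity(20.0, (0, 0, 1))
```

A library like Kelvin can push this checking into Mojo's type system, so a dimension mismatch becomes a compile-time error rather than a runtime exception.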

Brian Grenier

Champion 🏆

DeciMojo

DeciMojo provides an arbitrary-precision decimal and integer mathematics library for Mojo, delivering exact calculations for financial modeling, scientific computing, and applications where floating-point approximation errors are unacceptable.
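DeciMojo itself is written in Mojo, but the motivating problem is easy to show with Python's stdlib decimal module: binary floating point cannot represent most decimal fractions exactly, while decimal arithmetic stays exact:

```python
from decimal import Decimal

# Binary floats accumulate representation error on decimal fractions:
float_total = 0.1 + 0.1 + 0.1  # 0.30000000000000004, not 0.3

# Exact decimal arithmetic -- the property DeciMojo brings to Mojo:
exact_total = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
```

For financial code, that residual error compounds across millions of transactions, which is why exact decimal types are the standard choice there.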

Yuhao Zhu

Mosaic

Mosaic is the first computer vision library built specifically for heterogeneous compute, unifying the workflow into one language that runs on any hardware.

Christian Bator

Ember

Ember is a quantum computing framework that provides tools for building, simulating, and analyzing quantum circuits.

Adam Smith

CombustUI

CombustUI (WIP) is a GUI library for Mojo, built on top of the C++ FLTK (Fast Light Toolkit). It provides low-level control by calling FLTK functions directly.

Hammad Ali

Modular Events

We are always looking to connect with our community through events, meetups, talks, hackathons, and more. Browse our upcoming events below.

Spotlight:

Event details:

Date: Thursday, December 11, 6:00 PM - 9:00 PM
Location: Los Altos, California (RSVP to see the address)

Join us at Modular’s Los Altos office for a deep dive into the MAX platform.


Talks include:

  • Feras Boulala, Modular – MAX’s JIT Graph Compiler
  • Ehsan Kermani, Modular – MAX Framework Model Development API
  • Michael Dunn-OConnor, Modular – Build an LLM in MAX

Enjoy complimentary refreshments, engage in Q&A with the speakers, and network with other builders.

Please note that in-person space is limited and admission is first come, first served. Doors open at 6 PM PT, and a valid Luma event registration is required for admission. Arriving late may result in being turned away if the venue has reached capacity. We recommend arriving by 6 PM PT if possible.

A virtual livestream option will also be available.

Agenda

6 PM PT | Doors open

6:30 PM PT | Talks start

7:45-9 PM PT | Networking

All Events

Event details:

In October’s Community Meeting, we explored the latest innovations from the community and the Modular team. Martin Vuyk shared a generic FFT implementation in Mojo, Gabriel de Marmiesse presented a MAX backend for PyTorch, and Brad Larson walked through the Modular 25.6 release, which now offers unified GPU support for NVIDIA, AMD, and Apple. Watch to see demos, insights, and discussion from the community and the Modular team!

Event details:

For our 20th community meeting, Modular hosted a Q&A with the Mojo team to discuss the recently released Mojo vision and roadmap documents.

We also heard from:

  • Bernardo Taveira on “Porting GSplat Kernels to Mojo”
  • Seif Lotfy on “Datastructures for DB Development”

Event details:

Join us at Modular’s Los Altos office for an evening of big ideas and engineering insights on bringing AI from concept to production.

Talks include:

  • Chris Lattner, Modular – The future of democratizing AI compute and the role of open collaboration in accelerating progress.
  • Feifan Fan, Inworld AI – How to integrate state-of-the-art voice AI into consumer applications and make it production ready, featuring insights from Inworld’s collaboration with Modular.
  • Chris Hoge, Modular – Why matrix multiplication remains one of the most challenging problems in computer science and how the Modular stack helps developers optimize it.

Enjoy complimentary refreshments, engage in Q&A with the speakers, and network with other builders.

Please note that in-person space is limited and admission is first come, first served. Doors open at 6 PM PT, and a valid Luma event registration is required for admission. Arriving late may result in being turned away if the venue has reached capacity. We recommend arriving by 6 PM PT if possible.

A virtual livestream option will also be available.

Agenda

6 PM PT | Doors open

6:30 PM PT | Talks start

7:45-9 PM PT | Networking

Event details:

During our August Modular Community Meeting, Manuel walks us through optimizations to his Mojo regular expression library, mojo-regex, covering architecture decisions, a hybrid DFA/NFA engine, SIMD optimizations, benchmarking challenges, and the future roadmap. Modular intern Amir then shares his work enabling Apple GPU support in Modular, explaining the technical hurdles, compiler changes, and design choices required. Finally, we wrap up with Q&A with the team.

Developer Champions

Modular Champions are standout community members building with Mojo and MAX, sharing knowledge, and shaping the future of AI compute.

Owen Hilyard

Seth Stadick

Brian Grenier

Martin Vuyk

Sawyer Bergeron

Valentin Erokhin

Sora

Maxim Zaks

Gabriel de Marmiesse

Tilli Fe

Max Brylski

Build the future of AI with Modular

View Editions
  • Get started guide

    Install MAX with a few commands and deploy a GenAI model locally.

    Read Guide
  • Browse open models

    500+ models, many optimized for lightning-fast performance.

    Browse models