AI Engineering / AI Infrastructure / Open Source

PyTorch Foundation expands its AI stack with Safetensors, ExecuTorch, and Helion

PyTorch Foundation expands its open source AI stack with Safetensors, ExecuTorch, and Helion to improve model security, inference, and performance portability.
Apr 9th, 2026 3:18pm by

This week at PyTorch Conference EU in Paris, the PyTorch Foundation announced a trio of new projects joining its portfolio: Safetensors, ExecuTorch, and Helion.

Under the Linux Foundation, the PyTorch Foundation is a community-driven hub supporting the open-source PyTorch framework and a broader portfolio of open-source AI projects, such as DeepSpeed, Ray, and vLLM. 

Together, these projects provide vendor-neutral infrastructure for the entire AI lifecycle, from training through inference. By bringing Safetensors, ExecuTorch, and Helion into the fold, the foundation strengthens its position as the vendor-neutral hub for open-source AI. 

Safetensors brings secure model distribution

On Tuesday, Safetensors took center stage in PyTorch Foundation news as the newest foundation-hosted project. 

Hugging Face, the open-source AI platform, developed Safetensors in 2022 and has maintained it since, watching it grow into one of the most widely used tensor serialization formats in the open-source machine learning ecosystem. Now part of the PyTorch fold, Safetensors will help enable secure model distribution, minimizing the security risks associated with loading and executing model files. 

Unlike formats such as pickle, which let a (potentially malicious) author embed arbitrary executable code in model files, Safetensors pairs a simple "table of contents" with raw tensor data. Loading a model becomes pure parsing, preventing arbitrary code execution and thus improving safety during model sharing. 
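To make the "table of contents" idea concrete, here is a stdlib-only toy sketch of the documented safetensors layout: an 8-byte little-endian header length, then a JSON header describing each tensor, then the raw bytes. (This is an illustration of the format, not the real `safetensors` library, which should be used in practice.)

```python
# Toy illustration of the safetensors layout (stdlib only).
# File = 8-byte little-endian header length, JSON "table of contents",
# then raw tensor bytes. Reading is pure parsing: nothing in the file
# is ever executed, unlike a pickle payload.
import json
import struct

def write_safetensors_like(path, tensors):
    """tensors: dict of name -> (dtype_str, shape, raw_bytes)."""
    header, blobs, offset = {}, [], 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": shape,
            # Byte offsets into the data section that follows the header.
            "data_offsets": [offset, offset + len(raw)],
        }
        blobs.append(raw)
        offset += len(raw)
    hjson = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hjson)))  # 8-byte header length
        f.write(hjson)
        for raw in blobs:
            f.write(raw)

def read_header(path):
    """Return the parsed table of contents; the data is never run as code."""
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(hlen))
```

Because the header fully describes every tensor's dtype, shape, and byte range, a loader can validate or lazily memory-map exactly the data it needs without trusting the file's contents to behave.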

With developers working on new AI models at breakneck speed, security risks are also proliferating rapidly, making Safetensors a timely addition to the PyTorch Foundation’s portfolio — and a win for the industry at large. In the foundation’s announcement, executive director Matt White called Safetensors’ contribution “an important step towards scaling production-grade AI models.” 

From ExecuTorch, greater on-demand inference capabilities

Also on Tuesday, the PyTorch Foundation welcomed ExecuTorch as a PyTorch Core project. 

First introduced publicly at a PyTorch Conference in 2023, ExecuTorch began at Meta with the aim of simplifying the running of PyTorch models in edge and on-device environments (e.g., mobile phones, AR/VR headsets). 

Specifically, as stated in the PyTorch Foundation’s announcement, ExecuTorch was designed with four core principles in mind: 1) an end-to-end developer experience; 2) portability across hardware; 3) a small, modular, and efficient footprint; and 4) openness by default. 

In the last couple of years, the runtime has moved from being an internal tool to an open platform for on-device AI. It now not only supports model deployment for Meta products but has also found its place among a broader audience, helping developers productionize PyTorch-based models on edge devices, including for AR/VR experiences, computer vision, sensor processing, and generative AI and LLM-based assistants. 

Now a PyTorch core project under the PyTorch Foundation, ExecuTorch will extend PyTorch’s functionality to enable efficient AI inference on edge devices. By joining the foundation, it will also benefit from vendor-neutral governance, an open-source structure, and clear IP, trademark, and branding (Meta will remain a major contributor but will no longer hold sole control over the project). 

Helion standardizes AI kernel development

Helion also joined the PyTorch fold on Tuesday, adding to the foundation’s list of open source AI projects. 

A Python-embedded domain-specific language (DSL) for authoring machine learning kernels, Helion comes to the PyTorch Foundation with the goal of simplifying kernel development across the open AI ecosystem. 

Specifically, as outlined in the PyTorch Foundation’s announcement, it aims to “raise the level of abstraction compared to kernel languages, making it easier to write efficient kernels while enabling more automation in the autotuning process.” 
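Autotuning, in this context, means automatically measuring several candidate configurations of the same kernel (tile sizes, block shapes, and so on) and keeping the fastest. The stdlib-only sketch below illustrates the general idea only; it is not Helion's actual API, and the chunked-sum workload is a stand-in for a real GPU kernel.

```python
# Toy sketch of autotuning (not Helion's API): time candidate
# configurations of the same computation and keep the fastest one.
import time

def sum_chunked(data, chunk):
    # Every chunk size yields the same result; only performance differs,
    # which is what makes the configuration safe to tune automatically.
    total = 0
    for i in range(0, len(data), chunk):
        total += sum(data[i:i + chunk])
    return total

def autotune(fn, data, candidates, repeats=3):
    """Return the candidate configuration with the best measured runtime."""
    best_cfg, best_time = None, float("inf")
    for cfg in candidates:
        start = time.perf_counter()
        for _ in range(repeats):
            fn(data, cfg)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg
```

A real kernel autotuner searches a far larger configuration space and caches results ahead of time, but the contract is the same: the author writes one correct kernel, and the tuner picks the parameters that run fastest on the hardware at hand.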

Like the arrivals of Safetensors and ExecuTorch, Helion’s entry into the PyTorch Foundation’s portfolio comes at a good time. The AI era is shifting from primarily training models to running inference at scale, and with that shift comes demand for performance portability across diverse hardware. 

By arming developers with higher-level abstractions and ahead-of-time autotuning, Helion should make it easier to write high-performance, hardware-portable machine learning kernels. 

Expanding the open source AI stack

As the AI industry shifts from training models to deploying and scaling in production, it raises new questions about security, performance, and portability. By bringing Safetensors, ExecuTorch, and Helion under its umbrella, the PyTorch Foundation not only expands its project portfolio but also strengthens the entire open-source AI stack.
