- Start Date: 2025-05-16
- RFC PR: electron/rfcs#17
- Electron Issues: electron/electron#46779
- Reference Implementation: electron/electron#46811
- Status: Active
This feature introduces a way to import a native shared texture handle into Electron, specifically as a VideoFrame. VideoFrame natively supports several web rendering systems, including WebGPU and WebGL. This enables developers to integrate arbitrary native-rendered content with their web applications.
While Offscreen Rendering allows developers to export a shared texture for embedding the browser's output into their own rendering programs efficiently, there are cases where developers may want the reverse: to import images into the browser. For example, broadcasting apps may use this feature to embed various input sources with modern frontend technologies, providing a contemporary user experience. By using shared textures, we can maximize performance and provide a standardized way to bridge images between Electron and native rendering programs.
A new sharedTexture module will be added, available in both the sandboxed renderer process and the main process. This module provides the primary interface for importing external shared texture handles. The module handles lifetime management automatically, while still providing subtle APIs for advanced users.
This method is only available in the main process.
- `options` Object - Options for importing a shared texture.
  - `textureInfo` SharedTextureImportTextureInfo - The information of the shared texture to import.
  - `allReferenceReleased` Function (optional) - Called when all references in all processes are released. You should keep the native texture valid until this callback is called.
Imports the shared texture from the given options.
Returns SharedTextureImported - The imported shared texture.
This method is only available in the main process.
- `options` Object - Options for sending a shared texture.
  - `frame` WebFrameMain - The target frame to transfer the shared texture to. For a `webContents`, you can pass `webContents.mainFrame` as the target. If you provide a `webFrameMain` that is not a main frame, you'll need to enable `webPreferences.nodeIntegrationInSubFrames`, since this feature requires IPC between the main process and the frame.
  - `importedSharedTexture` SharedTextureImported - The imported shared texture.
  - `...args` any[] - Additional arguments to pass to the renderer process.
Send the imported shared texture to a renderer process. You must register a receiver in the renderer process before calling this method. This method has a 1000ms timeout; ensure the receiver is set and the renderer process is alive before calling this method.
Returns Promise<void> - Resolves when the transfer completes.
This method is only available in the renderer process.
- `callback` Function<Promise<void>> - The function to receive the imported shared texture.
  - `receivedSharedTextureData` Object - The data received from the main process.
    - `importedSharedTexture` SharedTextureImported - The imported shared texture.
    - `...args` any[] - Additional arguments passed from the main process.
Set a callback to receive imported shared textures from the main process.
`sharedTexture.subtle` SharedTextureSubtle
Provides subtle APIs to interact with an imported shared texture for advanced users.
- `pixelFormat` string - The pixel format of the texture.
- `colorSpace` ColorSpace (optional) - The color space of the texture.
- `codedSize` Size - The full dimensions of the shared texture.
- `visibleRect` Rectangle (optional) - A subsection of [0, 0, codedSize.width, codedSize.height]. In common cases, it is the full area.
- `timestamp` number (optional) - A timestamp in microseconds that will be reflected on the `VideoFrame`.
- `handle` SharedTextureHandle - The shared texture handle.
- `textureId` string - The unique identifier of this imported shared texture.
- `getVideoFrame` Function<VideoFrame> - Creates a `VideoFrame` that uses the imported shared texture in the current process. You can call `VideoFrame.close()` once you've finished using it; the underlying resources will wait for GPU completion internally.
- `release` Function - Releases this object's reference to the imported shared texture. The underlying resource stays alive until every reference is released.
- `subtle` SharedTextureImportedSubtle - Provides subtle APIs to interact with the imported shared texture for advanced users.
- `importSharedTexture` Function<SharedTextureImportedSubtle> - Imports the shared texture from the given options and returns the imported shared texture.
  - `textureInfo` SharedTextureImportTextureInfo - The information of the shared texture to import.
- `finishTransferSharedTexture` Function<SharedTextureImportedSubtle> - Finishes the transfer and returns the imported shared texture from the transfer object.
  - `transfer` SharedTextureTransfer - The transfer object of the shared texture.
- `syncToken` string - Opaque data for the sync token.
- `transfer` string - Opaque transfer data for the shared texture; can be transferred across Electron processes.
- `syncToken` string - The opaque sync token data for frame creation.
- `pixelFormat` string - The pixel format of the texture.
- `codedSize` Size - The full dimensions of the shared texture.
- `visibleRect` Rectangle - A subsection of [0, 0, codedSize.width, codedSize.height]. In common cases, it is the full area.
- `timestamp` number - A timestamp in microseconds that will be reflected on the `VideoFrame`.
Do not modify any property; use `sharedTexture.subtle.finishTransferSharedTexture` to get a `SharedTextureImported` back.
- `getVideoFrame` Function<VideoFrame> - Creates a `VideoFrame` that uses the imported shared texture in the current process. You can call `VideoFrame.close()` once you've finished using it; the underlying resources will wait for GPU completion internally.
- `release` Function - Releases the resources. If you transferred and obtained multiple `SharedTextureImported` objects, you must call `release` on each of them. The resource in the GPU process is destroyed only when the last one is released.
  - `callback` Function (optional) - Called when the GPU command buffer has finished using this shared texture. It provides a precise event for safely releasing dependent resources. For example, if this object was created by `finishTransferSharedTexture`, you can use this callback to safely release the original object that called `startTransferSharedTexture` in another process. You can also safely release the source shared texture that was passed to `importSharedTexture`.
- `startTransferSharedTexture` Function<SharedTextureTransfer> - Creates a `SharedTextureTransfer` that can be serialized and transferred to other processes.
- `getFrameCreationSyncToken` Function<SharedTextureSyncToken> - For advanced users. Typically called after `finishTransferSharedTexture` and passed to the object that called `startTransferSharedTexture`, to prevent the source object from releasing the underlying resource before the target object actually acquires the reference in the GPU process asynchronously.
- `setReleaseSyncToken` Function - For advanced users. If used, this object's underlying resource will not be released until the given sync token is fulfilled in the GPU process. By using sync tokens, users are not required to use release callbacks for lifetime management.
  - `syncToken` SharedTextureSyncToken - The sync token to set.
- `ntHandle` Buffer (optional) Windows - NT HANDLE that holds the shared texture. Note that this NT HANDLE is local to the current process.
- `ioSurface` Buffer (optional) macOS - IOSurfaceRef that holds the shared texture. Note that this IOSurface is local to the current process (not global).
- `nativePixmap` Object (optional) Linux - Structure containing the planes of the shared texture.
  - `planes` Object[] - Info for each plane of the shared texture.
    - `stride` number - The stride in bytes to use when accessing the buffer via a memory mapping. One per plane.
    - `offset` number - The offset in bytes to use when accessing the buffer via a memory mapping. One per plane.
    - `size` number - Size in bytes of the plane. This is necessary to map the buffers.
    - `fd` number - File descriptor for the underlying memory object (usually a dmabuf).
  - `modifier` string - The modifier, retrieved from the GBM library and passed to the EGL driver.
  - `supportsZeroCopyWebGpuImport` boolean - Indicates whether zero-copy import to WebGPU is supported.
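To make the structure concrete, here is a sketch of what a filled `SharedTextureImportTextureInfo` might look like on Windows and Linux. Every value below (sizes, the pixel format string, the file descriptor, the placeholder handle bytes) is hypothetical; in a real application these come from your native rendering code and the actual handle for the current platform.

```javascript
// Hypothetical SharedTextureImportTextureInfo objects. Field names follow the
// structure documented above; every value is a placeholder, not a real handle.
const windowsTextureInfo = {
  pixelFormat: 'bgra', // placeholder format string
  codedSize: { width: 1920, height: 1080 },
  visibleRect: { x: 0, y: 0, width: 1920, height: 1080 },
  timestamp: 0, // microseconds, reflected on the resulting VideoFrame
  handle: {
    // NT HANDLE serialized into a Buffer; must be local to this process
    ntHandle: Buffer.alloc(8)
  }
};

const linuxTextureInfo = {
  pixelFormat: 'bgra',
  codedSize: { width: 1920, height: 1080 },
  handle: {
    nativePixmap: {
      planes: [
        // One entry per plane; a single BGRA plane here
        { stride: 1920 * 4, offset: 0, size: 1920 * 1080 * 4, fd: 42 }
      ],
      modifier: '0',
      supportsZeroCopyWebGpuImport: false
    }
  }
};
```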
In this section, we will use the output of offscreen rendering as an example of an external shared texture, importing and rendering it in another window. We use the managed version of the API for easier lifetime management.
- Import the `sharedTexture` module.

```js
const { sharedTexture } = require('electron');
```

- Create a `BrowserWindow` and enable the offscreen rendering feature, setting the target window size.

```js
const osr = new BrowserWindow({
  show: true,
  webPreferences: {
    offscreen: {
      useSharedTexture: true
    }
  }
});

// It is recommended to use `setSize`, as width and height passed to the
// BrowserWindow constructor may be constrained by the screen size.
osr.setSize(3840, 2160);
```

- Listen for the `paint` event, import the shared texture, and get a transfer object to pass to the renderer process. The handle is a `SharedTextureHandle`, which contains platform-specific information about the native handle.
```js
osr.webContents.on('paint', async (event) => {
  const texture = event.texture;
  if (!texture) {
    return;
  }

  // Import the external shared texture.
  // `textureInfo` has the same properties as `importSharedTexture` requires;
  // to import your own native shared texture, fill in the corresponding
  // properties yourself.
  const imported = sharedTexture.importSharedTexture({
    textureInfo: texture.textureInfo,
    // Called when all references to the imported shared texture are
    // released. At that point, it is safe to release the source native
    // texture.
    allReferenceReleased() {
      texture.release();
    }
  });

  // Use the utility method to transfer to the renderer process. This has a
  // timeout in case the renderer is dead or has not registered a receiver
  // with `sharedTexture.setSharedTextureReceiver`.
  await sharedTexture.sendSharedTexture({
    frame: win.webContents.mainFrame,
    importedSharedTexture: imported
  });

  // We can release here, as we're done using it in the main process.
  // The receiver must call `release()` at the end of its usage. The
  // underlying resource stays alive until all holders have called
  // `release()`; references are tracked automatically across processes.
  imported.release();
});
```

- In `preload.js`, expose an API to the main world and receive the transfer object.
```js
const { sharedTexture } = require('electron');
const { contextBridge } = require('electron/renderer');

contextBridge.exposeInMainWorld('textures', {
  setSharedTextureReceiver: (cb) => {
    sharedTexture.setSharedTextureReceiver(async (data) => {
      // Provide the imported shared texture to the renderer process
      await cb(data.importedSharedTexture);
    });
  }
});
```

- In `renderer.js`, prepare a canvas and a WebGPU rendering pipeline.
WebGPU Rendering Pipeline Initialization Code

```js
// Create a canvas to present the rendered output
const canvas = document.createElement('canvas');
canvas.width = 128;
canvas.height = 128;
canvas.style.width = '128px';
canvas.style.height = '128px';
document.body.appendChild(canvas);

const context = canvas.getContext('webgpu');

const initWebGpu = async () => {
  // Configure the WebGPU context
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();
  const format = navigator.gpu.getPreferredCanvasFormat();
  context.configure({ device, format });

  // Create a function that renders a VideoFrame to the canvas
  window.renderFrame = async (frame) => {
    try {
      // Create an external texture from the VideoFrame
      const externalTexture = device.importExternalTexture({ source: frame });

      // Create a bind group layout, specifying the external texture type
      const bindGroupLayout = device.createBindGroupLayout({
        entries: [
          {
            binding: 0,
            visibility: GPUShaderStage.FRAGMENT,
            externalTexture: {}
          },
          {
            binding: 1,
            visibility: GPUShaderStage.FRAGMENT,
            sampler: {}
          }
        ]
      });

      // Create the pipeline layout
      const pipelineLayout = device.createPipelineLayout({
        bindGroupLayouts: [bindGroupLayout]
      });

      // Create the render pipeline
      const pipeline = device.createRenderPipeline({
        layout: pipelineLayout,
        vertex: {
          module: device.createShaderModule({
            code: `
              @vertex
              fn main(@builtin(vertex_index) VertexIndex : u32) -> @builtin(position) vec4<f32> {
                var pos = array<vec2<f32>, 6>(
                  vec2<f32>(-1.0, -1.0),
                  vec2<f32>(1.0, -1.0),
                  vec2<f32>(-1.0, 1.0),
                  vec2<f32>(-1.0, 1.0),
                  vec2<f32>(1.0, -1.0),
                  vec2<f32>(1.0, 1.0)
                );
                return vec4<f32>(pos[VertexIndex], 0.0, 1.0);
              }
            `
          }),
          entryPoint: 'main'
        },
        fragment: {
          module: device.createShaderModule({
            code: `
              @group(0) @binding(0) var extTex: texture_external;
              @group(0) @binding(1) var mySampler: sampler;

              @fragment
              fn main(@builtin(position) fragCoord: vec4<f32>) -> @location(0) vec4<f32> {
                let texCoord = fragCoord.xy / vec2<f32>(${canvas.width}.0, ${canvas.height}.0);
                return textureSampleBaseClampToEdge(extTex, mySampler, texCoord);
              }
            `
          }),
          entryPoint: 'main',
          targets: [{ format }]
        },
        primitive: { topology: 'triangle-list' }
      });

      // Create the bind group
      const bindGroup = device.createBindGroup({
        layout: bindGroupLayout,
        entries: [
          { binding: 0, resource: externalTexture },
          { binding: 1, resource: device.createSampler() }
        ]
      });

      // Create a command encoder and render pass
      const commandEncoder = device.createCommandEncoder();
      const textureView = context.getCurrentTexture().createView();
      const renderPass = commandEncoder.beginRenderPass({
        colorAttachments: [
          {
            view: textureView,
            clearValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
            loadOp: 'clear',
            storeOp: 'store'
          }
        ]
      });

      // Set the pipeline and bind group, then draw
      renderPass.setPipeline(pipeline);
      renderPass.setBindGroup(0, bindGroup);
      renderPass.draw(6); // Draw a rectangle composed of two triangles
      renderPass.end();

      // Submit the commands
      device.queue.submit([commandEncoder.finish()]);
    } catch (error) {
      console.error('Rendering error:', error);
    }
  };
};

initWebGpu().catch((err) => {
  console.error('Failed to initialize WebGPU:', err);
});
```

- Obtain a `VideoFrame` from the imported shared texture and render it. After submitting the WebGPU command buffer, you can call `close` on the `VideoFrame`.
```js
window.textures.setSharedTextureReceiver(async (imported) => {
  try {
    // Get a VideoFrame from the imported texture
    const frame = imported.getVideoFrame();

    // Once we have the VideoFrame, we can release the imported shared texture.
    imported.release();

    // Render using WebGPU
    await window.renderFrame(frame);

    // Release the VideoFrame as it is no longer needed. It is safe to
    // `close()` the frame after `imported` has called `release()`; the
    // resource is managed automatically.
    frame.close();
  } catch (error) {
    console.error('Error getting VideoFrame:', error);
  }
});
```

This managed API automatically handles IPC and reference counting for users. When all holders in different processes have called `release()`, the resource is released automatically. Advanced users who want access to the raw APIs can use the `subtle` property. For more details, take the tests as an example.
The managed API is designed to be easy to use for most users. It automatically handles IPC and reference counting, which would otherwise be painful to manage manually.
- It automatically passes sync tokens through Electron IPC, with an async API.
- It automatically counts references across processes, and detects if there's a leak.
- It provides a clearer API for users to use, and is easier to understand.
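The cross-process reference counting can be illustrated with a small, self-contained model. This sketches the semantics only (it is not Electron's actual implementation): every holder takes a reference, and the `allReferenceReleased`-style callback fires only when the last holder releases.

```javascript
// Illustrative model of cross-process reference counting (not the real
// implementation). Each holder of the imported texture takes a reference;
// `allReferenceReleased` fires only when the last reference is dropped.
function createManagedTexture (allReferenceReleased) {
  let refCount = 0;
  return {
    // Called whenever a new holder (e.g. another process) acquires the texture
    addRef () {
      refCount++;
    },
    // Each holder calls release() exactly once when it is done
    release () {
      if (refCount === 0) throw new Error('release() called too many times');
      refCount--;
      if (refCount === 0) allReferenceReleased();
    }
  };
}

// Usage: the main process imports (one reference), then transfers to a
// renderer (second reference). The callback runs only after both release.
let released = false;
const texture = createManagedTexture(() => { released = true; });

texture.addRef(); // main-process holder
texture.addRef(); // renderer-process holder (via transfer)

texture.release(); // main process is done
console.log(released); // false: the renderer still holds a reference

texture.release(); // renderer is done
console.log(released); // true: safe to release the source native texture
```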
The main goal of the current implementation is to import external shared texture descriptions while reusing as much of the existing Chromium infrastructure as possible to minimize maintenance burden. SharedImage is chosen as the underlying holder of the external texture because it provides SharedImageInterface::CreateSharedImage, which accepts a GpuMemoryBufferHandle containing native shared handle types: NTHANDLE for Windows, IOSurfaceRef for macOS, and NativePixmapHandle with file descriptors for each plane for Linux. However, during research, several critical problems led to the final design described below.
On Windows, a shared D3D11 texture can be created by GetSharedHandle (deprecated) or CreateSharedHandle. The deprecated method generates a non-NTHANDLE that can be globally accessed, while the newer one generates an NTHANDLE that is local to the current process. To share it with other processes, you need to call DuplicateHandle to create this handle in the remote process.
On macOS, IOSurface can also be global by setting kIOSurfaceIsGlobal, which is now deprecated. To share an IOSurface with other processes, you need to create a mach_port from it and pass the mach_port through a previously created IPC (which also uses mach_port as transport), making it significantly more complex than on Windows.
Given these obstacles, you must ensure that when calling sharedTexture.importSharedTexture, the handle is already available to the current process. You might be able to use a global IOSurface, but a non-NTHANDLE is not an option as described in problem 2 below.
In fact, Chromium's IPC internally handles all these concerns (duplicating handles for remote processes, passing mach_port through mach_port), which is why the OSR paint event can use the handle directly in the main process, even though the original handle is generated in the GPU process—because IPC has transparently handled these issues.
Chromium takes ownership of the GpuMemoryBuffer and its native representation. Once the resource is destroyed, the handle will be closed. Thus, the handle the user imports must allow Chromium to take ownership.
On Windows, calling CloseHandle on a non-NTHANDLE is invalid and will cause Chromium to crash, so you cannot use a non-NTHANDLE when importing (in the future, we may provide a helper for this). To work within this design, when calling importSharedTexture, the NTHANDLE is duplicated internally for the user, meaning the user retains ownership of the original handle.
On macOS, IOSurface is a reference-counted resource. When calling importSharedTexture, instead of taking ownership, Chromium can simply retain this resource and increment the reference count, so the user still retains ownership.
Initially, I considered using the WebGPU Dawn native API to import the external texture as a WGPUSharedTextureMemory, but encountered several problems: for example, it offered no way to export the texture again, it made managing the lifetime of a frame difficult, and working with WebGPU directly was non-ideal.
SharedImage has advantages for sharing across processes because it holds a reference to a Mailbox, which points to the corresponding SharedImageBacking in the GPU process. Therefore, I use SharedImageInterface::ImportSharedImage and ClientSharedImage::Export to serialize sufficient information to retrieve the SharedImage reference in another process, and it can also reuse the mojo serializer to serialize as a string. Everything is handled by Chromium.
Lifecycle management is a bit tricky. When you get a SharedImage holding a Mailbox from either ImportSharedImage or CreateSharedImage, the Mailbox is just a placeholder; it doesn't ensure the GPU has actually imported the texture and increased the reference count to the Mailbox. Chromium doesn't wait for the GPU to finish, typically using a SyncToken to ensure GPU task dependencies. For example, releasing a SharedImage is also an asynchronous operation, and calling release often requires a SyncToken that acts as a barrier to ensure all GPU importing and rendering tasks are finished. In our case, since we can't use a mojo callback to update the SyncToken across processes, we use callbacks to notify dependent resources to release.
The final challenge is managing the lifetime of a frame. Currently, I use OSR to get an exported shared texture generated by Chromium itself. As the documentation states, the texture needs to be manually released. By importing this texture into a SharedTextureImported in Electron, we must ensure the imported one is released before the source texture is released.
What's more challenging is that the paint event occurs in the main process, while rendering must occur in renderer processes, so we must use startTransferSharedTexture to pass the underlying SharedImage to another process.
Most GPU calls are asynchronous, sent to the GPU process's command buffer through the GpuChannel of each client process. When we have two SharedImage instances referencing the same Mailbox in two different processes, we don't know when the GPU has finished using the resources. Typically, this is guaranteed by a SyncToken. For example, we can schedule the destruction of a SharedImage with an empty SyncToken in the main process (where the texture was first imported), but when we use the same SharedImage in the renderer process and use it in WebGPU (or WebGL) pipelines, the destruction token will be generated by WebGPU to prevent destruction before the GPU uses it. The destruction won't occur until WebGPU rendering finishes.
Ideally, if we were in Chromium, we could use mojo to create a PendingRemote callback and update the main process destruction SyncToken to prevent the main process from releasing the frame before the GPU starts working on it. In the future, we may implement this in Electron code to wrap the mojo functionality. Eventually, I found a way to use gpu::ContextSupport and register a callback when a specific SyncToken is released (signaled). When you call release() on the imported shared texture object, if you've used VideoFrame and imported it into a WebGPU pipeline, it will wait for WebGPU to finish rendering, then run a callback to notify you to release dependent resources, such as the original imported object in the main process, the source texture, etc.
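The release-callback timing can be modeled in plain JavaScript, with a promise standing in for a SyncToken being signaled in the GPU process. This is a sketch of the semantics only, not the actual implementation:

```javascript
// Illustrative model: a "sync token" is represented by a promise that
// resolves when the GPU has finished using the texture.
function createSyncToken () {
  let signal;
  const signaled = new Promise((resolve) => { signal = resolve; });
  return { signaled, signal };
}

// release(callback) does not free anything immediately; it waits until the
// token is signaled, then runs the callback so dependent resources (the
// source texture, the main-process import, ...) can be freed safely.
function release (token, callback) {
  token.signaled.then(callback);
}

const order = [];
const token = createSyncToken();

release(token, () => order.push('release dependents'));
order.push('gpu still rendering');

// Later, the GPU finishes its submitted work and the token is signaled;
// only then does the release callback run.
token.signal();
queueMicrotask(() => {
  order.push('done');
  console.log(order);
  // → ['gpu still rendering', 'release dependents', 'done']
});
```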
Making a copy of the source texture at import time was also considered, but it's not ideal. First, it introduces an extra copy and doubles GPU memory usage. Second, performance suffers if we make the user wait for the copy to complete. Thus, we choose not to provide a copy option and encourage users to listen for the allReferenceReleased callback to release the source texture.
The `subtle` property exposes raw APIs to advanced users. If the subtle APIs are used, the lifecycle of the objects must be carefully managed:
- First, obtain an external shared texture, in this example from offscreen rendering, referred to as `originTexture`.
- Import `originTexture` to get `importedTextureMain`.
- Obtain a `transferObject` from `importedTextureMain` and send it through the IPC channel.
- Caution: At this point, you cannot release `importedTextureMain`, as the import in the other process is not yet complete.
- Receive the `transferObject` in the renderer process and obtain `importedTextureRenderer`.
- Caution: At this point, you still cannot release `importedTextureMain`, as GPU tasks are asynchronous. All `importedTexture` objects are backed by a `Mailbox`, which has a cross-process reference counter in the GPU process. The counter will not increase until the command buffer actually runs the `importedTextureRenderer` import task. Thus, if you release `importedTextureMain` now, the counter may drop to zero, causing the GPU to destroy the mailbox and making the future import task fail.
- Obtain a `VideoFrame` from `importedTextureRenderer`, use it in WebGPU, and create an `ExternalTexture`.
- After submitting the command buffer, all GPU tasks are submitted. You can safely call `close` on the `VideoFrame`, as the command is guaranteed to execute after all submitted tasks, ensured by a `SyncToken` (explained in detail later).
- For similar reasons, you can then call `release` on `importedTextureRenderer`, as all tasks in the renderer process are complete. Pass a callback to the `release` function, which uses a `SyncToken` internally to ensure the callback is triggered after all submitted tasks are executed by the GPU, making it safe to release further resources.
- Now, notify the main process to release `importedTextureMain`. It is recommended to also use a callback to safely release `originTexture`.
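The steps above can be sketched as code. Since the subtle APIs only exist inside Electron, the snippet below uses plain stub objects that merely record the order of operations; the names mirror the walkthrough, and the point is the required ordering of the callback-driven release chain, not a runnable Electron program.

```javascript
// Ordering sketch of the subtle lifecycle. The objects below are stubs that
// only record the order of operations; in a real app they would be
// `sharedTexture.subtle` objects, and the calls would cross process and GPU
// boundaries asynchronously.
const log = [];

const originTexture = { release: () => log.push('release originTexture') };

// Main process: import originTexture, then start a transfer.
const importedTextureMain = {
  startTransferSharedTexture: () => ({ transfer: '<opaque>', syncToken: '<opaque>' }),
  release: (callback) => { log.push('release importedTextureMain'); callback(); }
};
const transferObject = importedTextureMain.startTransferSharedTexture();
// ... send transferObject over IPC (not shown) ...
// Caution: importedTextureMain must NOT be released yet.

// Renderer process: finish the transfer, render, then release with a callback.
const importedTextureRenderer = {
  getVideoFrame: () => ({ close: () => log.push('close VideoFrame') }),
  release: (callback) => { log.push('release importedTextureRenderer'); callback(); }
};
const frame = importedTextureRenderer.getVideoFrame();
// ... import `frame` into WebGPU and submit the command buffer ...
frame.close(); // safe: ordered after the submitted GPU work via a SyncToken

importedTextureRenderer.release(() => {
  // The GPU has finished with the renderer's reference; now the main process
  // may release its import, which in turn allows releasing the source.
  importedTextureMain.release(() => originTexture.release());
});

console.log(log);
// → ['close VideoFrame', 'release importedTextureRenderer',
//    'release importedTextureMain', 'release originTexture']
```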
This feature mainly depends on the SharedImage API in Chromium.
Chromium's SharedImage may have minor API changes in the future. For example, VideoFrame might eventually manage the lifetime of the underlying SharedImage automatically (which is actually more ideal), but this could introduce minor maintenance burdens.
This design is nearly optimal, as it requires no upstream patches and already takes into account all Chromium internal design constraints.
Besides using VideoFrame, one possible alternative is to use WebGPU directly, which would require significant patches to Dawn and Chromium, and was therefore abandoned.
Regarding lifecycle management, I considered passing callbacks like MessagePort does. After some research, although it is possible to create an IPC channel using the Blink MessagePortDescriptor, I found this does not simplify things much:
- You still have to ensure the remote process has finished calling `finishTransferSharedTexture` (which is when it transmits a new SyncToken back) before you can drop the reference or call release; it's still an asynchronous procedure.
- You still don't know when to release the original native handle; a callback is still needed to signal a safe release.
Therefore, I decided not to use automated SyncToken IPC, and to continue using the release callback to provide a precise and safe timing for manually releasing dependencies.
Previously, users could import external images via video streams or bitmaps, which consumed CPU and resulted in visual loss. By using shared textures, we can import GPU resources directly, which is a much more efficient way to display arbitrary content in web applications.
- How to handle native shared handles: can we provide this feature in sandboxed renderer processes?
- Are there any other web standards besides `VideoFrame` that can be used to render the texture?
- Performance: It would be meaningful to profile the overhead of managing the lifecycle through Electron IPC callbacks.
- Provide a utility method to let users import a global `IOSurface` or global D3D11 `HANDLE`.
- Since we can import a shared texture as a `SharedImage`, it may be possible to implement OSR using `SharedImage` instead of `FrameSinkVideoCapturer`.
