Talk:PRIME
PRIME GPU OFFLOADING
The section "PRIME GPU OFFLOADING" is, in my opinion, a collection of solutions that outdate each other. I came from bumblebee and optirun, which stopped working with nvidia 440.31, and tried my luck using the approach here. The hints linked in the Notes of this section were already outdated, and I found help in the README of the current NVIDIA driver version: https://download.nvidia.com/XFree86/Linux-x86_64/440.31/README/primerenderoffload.html
How about either linking to that article or putting a snippet of working xorg-config plus which pkgs to use in that section?
—This unsigned comment is by Bollie (talk) 09:25, 26 November 2019 (UTC). Please sign your posts with ~~~~!
The --provideroffloadsink option is still accurate for Mesa drivers. Not sure about the proprietary driver, but feel free to add more information with links to the sources. Lekensteyn (talk) 23:08, 1 December 2019 (UTC)
PRIME render offload
It's true that edits to files under /usr/share/ will not survive package upgrades, but I didn't find any way to make this work that doesn't involve editing /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf to remove the PrimaryGPU option. Either I come up with a pacman hook (hacky as hell) or we add a warning about it to the wiki. Right now it seems that this depends on configuration alone, and forcing that default configuration from the package doesn't seem sane. A minimal correct 10-nvidia-drm-outputclass.conf file should look like this:
/usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf
Section "OutputClass"
Identifier "nvidia"
MatchDriver "nvidia-drm"
Driver "nvidia"
ModulePath "/usr/lib/nvidia/xorg"
ModulePath "/usr/lib/xorg/modules"
EndSection
Samsagax (talk) 19:45, 9 December 2019 (UTC)
- Either an xorg.conf file or a /etc/X11/xorg.conf.d snippet has precedence over /usr/share/X11/xorg.conf.d. Please remove that section from PRIME render offload; it's not necessary. I'll make some changes later this week too, and I might remove that. —This unsigned comment is by Grazzolini (talk) 17:22, 10 December 2019. Please sign your posts with ~~~~!
- I have created a package for this setup called nvidia-prime. It comes with a script and an xorg.conf.d snippet. During my tests, I found that when using it without commenting out the PrimaryGPU option in 10-nvidia-drm-outputclass.conf, I got a reverse PRIME setup by default, without any /etc/X11/xorg.conf or /etc/X11/xorg.conf.d snippet, which means that, because of the PrimaryGPU option, X would use the NVIDIA card for everything. If I comment out that option, I get the PRIME render offload setup. I'm going to discuss this with the nvidia-utils maintainer and see if we can either remove that snippet entirely, or at least remove the PrimaryGPU option. Grazzolini (talk) 00:08, 11 December 2019 (UTC)
- I think removing that option, and every other option that doesn't add or impose a setting on the user, is the way to go. As a general rule, there should only be a sane default that won't interfere with user configuration, or at least back it up. About the precedence: if what you say is true, then adding a snippet under /etc/X11/xorg.conf.d with the option PrimaryGPU set to "no" should do the trick, something like:
/etc/X11/xorg.conf.d/10-nvidia-drm-outputclass-primary-no.conf
Section "OutputClass"
Identifier "nvidia"
Option "PrimaryGPU" "no"
EndSection
- I'll try to test this tonight. Aside from this, I think the whole section about PRIME offload should be rewritten. I can help with that, and with my findings on setting up the NVIDIA proprietary driver specifically. Samsagax (talk) 17:01, 11 December 2019 (UTC)
- Yes, setting it up in /etc/X11/xorg.conf or /etc/X11/xorg.conf.d should have precedence over /usr/share/X11/xorg.conf.d. I have opened FS#64805 to track this and I'm talking with the current maintainers, svenstaro and felixonmars. In addition to dropping the PrimaryGPU option, we should also drop the modesetting configuration for the Intel card from that file, because it means that even if you have xf86-video-intel installed, it won't be used unless you force it with an xorg.conf or xorg.conf.d snippet. Basically it's interfering with normal Xorg autodetection. It also makes Xorg.wrap fail and start X as root by default. Grazzolini (talk) 20:08, 11 December 2019 (UTC)
- I have tested this as well, by copying the 10-nvidia-drm-outputclass.conf from /usr/share/X11/xorg.conf.d to /etc/X11/xorg.conf.d and both removing the PrimaryGPU option and setting it to "no" as well. Neither worked. The only solution is for that file to drop the PrimaryGPU option, indeed. Grazzolini (talk) 02:31, 13 December 2019 (UTC)
Hi, new here, proposing an edit to: "As per the official documentation, it only works with the modesetting driver over Intel graphics card." I have a working setup with an Intel HD Graphics 620 using the Intel driver and Nvidia Geforce 940MX using the Nvidia driver (in an ASUS S510UQ laptop); I've confirmed this with xrandr --listproviders. Perhaps change to "...it only works with the modesetting driver, but success has been had with the Intel driver instead..."? Irradium (talk) 01:20, 12 January 2020 (UTC)
- Edited as appropriate to previous statement. Irradium (talk) 22:10, 14 January 2020 (UTC)
Official PRIME solution
I have found the solution here to work pretty well when applied, and optimus-manager (AUR) with the default config and hybrid mode fixes the lack of a video output in configs that exhibit it.
I wonder if the wiki page could be updated to reflect that? --TheSola10 (talk) 12:01, 18 December 2019 (UTC)
- The documentation states that: This feature requires a Turing or newer GPU. It can't really be used for all cards. You are welcome to edit the page to add info about this dynamic power management, but there's nothing "Official" about using optimus-manager. Grazzolini (talk) 12:23, 18 December 2019 (UTC)
Prime and Wayland
I might be mistaken, but I believe that reverse PRIME is not possible with the current (470.57.02-1) driver.
I think it would be nice to clarify the situation regarding PRIME and Wayland globally.
Pums974 (talk) 10:04, 21 July 2021 (UTC)
Configure applications to render using GPU
This section has examples of running an application offloaded to the NVIDIA GPU with Dynamic Power Management enabled, using the environment variables __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia. prime-run can also be used for this purpose; it is a convenient wrapper that sets these variables around the given command (which can be verified by running cat $(which prime-run)). So that command could also be mentioned, as in the section External GPU#Xorg rendered on iGPU, PRIME render offload to eGPU, where both commands appear. RaZorr (talk) 17:36, 16 March 2024 (UTC)
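For illustration, a minimal sketch of such a wrapper, equivalent to exporting the two variables by hand. This is an assumption for demonstration only: the actual prime-run script shipped by nvidia-prime may set additional variables (check with cat $(which prime-run)), and /tmp/prime-run-sketch is just a throwaway path.

```shell
# Write a minimal offload wrapper (sketch; the real prime-run may set more):
cat > /tmp/prime-run-sketch <<'EOF'
#!/bin/sh
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"
EOF
chmod +x /tmp/prime-run-sketch

# Any command launched through the wrapper sees the offload variables:
/tmp/prime-run-sketch sh -c 'echo "$__NV_PRIME_RENDER_OFFLOAD $__GLX_VENDOR_LIBRARY_NAME"'
# prints: 1 nvidia
```

In real use the final line would be something like /tmp/prime-run-sketch glxinfo, and the variables only affect that one process tree, not the whole session.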
Which sections under https://wiki.archlinux.org/title/NVIDIA# do we have to follow?
Section 1.2 Closed-source drivers says to follow NVIDIA# for installation, but do we also have to follow 1.3, 1.4 and 1.5? And what about the rest, sections 2, 3, 4, 5 and 6? Phoenix324 (talk) 17:03, 29 June 2024 (UTC)
Article should be rewritten with focus on Wayland - what works and what doesn't
Almost everything in this article is based on an Xorg setup, so it's really hard to figure out what works and what doesn't in a pure Wayland setup. Mads (talk) 12:18, 22 April 2025 (UTC)
- What "wayland" are you talking about? There is no "wayland".
- There's a bunch of Wayland compositors which will all have varying support for random stuff, including PRIME. The general approach is addressed in the article; specific caveats would likely be better addressed in the article of the specific Wayland compositor?
- What might or might not be useful would be a red/yellow/green feature matrix (unsupported / see caveats in the compositor article / works OOTB). Seth (talk) 15:21, 22 April 2025 (UTC)
HDMI not working after using 3.7 section environment variable
The ArchWiki should mention that setting EGL_VENDOR_LIBRARY_FILENAMES=/usr/share/glvnd/egl_vendor.d/50_mesa.json and __GLX_VENDOR_LIBRARY_NAME=mesa in /etc/environment can affect the use of the HDMI port controlled by the NVIDIA dGPU. In my case, this caused GNOME and GDM to freeze when HDMI is plugged in, and it also created freezing issues on my internal laptop screen (only when the HDMI port controlled by the NVIDIA dGPU is plugged in). Maxence (talk) 08:31, 24 August 2025 (UTC)
- This has nothing to do w/ HDMI (as much as I enjoy shitting on that protocol) - using an (any, HDMI, DP, VGA, …) output on the nvidia GPU and RTD3 are mutually exclusive conditions, the premise in that paragraph doesn't apply to you. Seth (talk) 08:51, 24 August 2025 (UTC)
- Yes, sorry, I should have been clearer. It's obviously related to all outputs controlled by the NVIDIA GPU (in my case, just an HDMI port). However, I disagree with the second part of your statement. For example, when I'm not using any output from my NVIDIA GPU, I want it to enter the D3cold state. But when I use one, I want the NVIDIA GPU to wake up from D3cold. I find that this section of the ArchWiki could be clearer. It would help to add something like:
“This will prevent the use of your NVIDIA GPU by processes using the EGL and GLX libraries, but it could cause issues if you intend to use an output from your NVIDIA card.”
- Or something like that. What's in the wiki isn't wrong, but I think it would be clearer to spell out the implications. Maxence (talk) 09:29, 24 August 2025 (UTC)
using an output on the nvidia GPU and RTD3 are mutually exclusive conditions
- Whether you agree with that or not: you do one OR the other, even if you intend to sequence one AND the other.
- But personally I don't mind (i didn't add that paragraph) if you want to stress that in the wiki.
- Hacking around the nvidia driver will certainly preclude its usage (which is why, in the referenced BBS thread, the user then alters the environment to allow processes to access the nvidia GPU, which is what you want the display server to be able to do to drive an output).
- Have you tried, instead of exporting the environment to the display server (globally), exporting it only from the display server, for subsequent program calls, as they seem to be what inadvertently wakes the GPU? Seth (talk) 12:37, 24 August 2025 (UTC)
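To illustrate the per-launch alternative being suggested here, a sketch that scopes the Mesa override to a single command instead of setting it in /etc/environment. The JSON path and variable names are the ones from the section under discussion; the sh -c command at the end is a stand-in for whatever application you would actually launch.

```shell
# Per-launch override: only this process and its children are forced to Mesa,
# so the display server itself can still wake the NVIDIA GPU for an output.
EGL_VENDOR_LIBRARY_FILENAMES=/usr/share/glvnd/egl_vendor.d/50_mesa.json \
__GLX_VENDOR_LIBRARY_NAME=mesa \
sh -c 'echo "GLX vendor for this process: $__GLX_VENDOR_LIBRARY_NAME"'
# prints: GLX vendor for this process: mesa
```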
Enabling NVIDIA Runtime Hybrid Power (Thinkpad P51)
I posted this guide on the Arch Linux forums; it got binned because it was a "PSA announcement" style topic, and I was told that it was better suited here.
I'm trying to figure out where I can post this to make it accessible to others having the same issue, or how to merge the overlap into the main page here.
Background:
I'm running Arch Linux on a ThinkPad P51 with a Quadro M1200 (Maxwell architecture).
Ever the perfectionist, I had the following goal: to enable true hybrid power, ensuring the NVIDIA card stays completely powered down (suspended) and only wakes up when specifically called by an application.
System Specs:
- Host: Lenovo ThinkPad P51 (20HH000NUS)
- Kernel: Linux 6.18.6-arch1-1
- GPU: NVIDIA Quadro M1200 Mobile (Maxwell)
- NVIDIA Driver: 580.126.09 (Proprietary Legacy Branch)
Symptoms / Issues Encountered:
- Package Conflicts: Standard nvidia-dkms or nvidia-open-dkms failed. You MUST use the proprietary closed-source modules for Maxwell.
- The Audio Anchor: The NVIDIA HDMI Audio device (01:00.1) kept the GPU in a D0 power state, physically blocking the GPU from sleeping.
- High Power Draw: Idle consumption was ~25W-30W; with the fix, it is ~11W.
Solution Steps:
- Driver Installation (The Specific Sequence): If you have a conflicting nvidia-utils preventing an install, you must force-remove it before installing the specific 580xx legacy branch. Note: Ensure you have linux-headers installed so the DKMS module can build. Force-remove the conflicting utility packages:
sudo pacman -Rdd nvidia-utils lib32-nvidia-utils
Then install the specific legacy driver branch and prime-run (this branch is required because 590+ dropped support for Maxwell):
yay -S nvidia-580xx-dkms nvidia-580xx-utils lib32-nvidia-580xx-utils nvidia-prime
- Kernel & Udev Configuration: Enable Fine-Grained Power Management in /etc/modprobe.d/nvidia-pm.conf:
options nvidia NVreg_DynamicPowerManagement=0x02
Allow the kernel to manage power for both the GPU and its audio controller in /etc/udev/rules.d/80-nvidia-pm.rules:
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", ATTR{power/control}="auto"
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x040300", ATTR{power/control}="auto"
- The "Manual Reset" (The "Secret Sauce"): If the card is still stuck in active after a reboot, use this sequence to force it to respect the new rules:
echo "0000:01:00.0" | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo "auto" | sudo tee /sys/bus/pci/devices/0000:01:00.1/power/control
echo "0000:01:00.0" | sudo tee /sys/bus/pci/drivers/nvidia/bind
Optional: Quality of Life (How to use it)
Quick Status Monitoring
Add an alias to your shell config (config.fish or .bashrc) to check your GPU power state instantly:
alias gpu-status="cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status"
Launching Apps with dGPU
To run heavy apps (Games, CAD, Video Editors) on the NVIDIA card, use:
prime-run [application_name]
To make an app like Discord or VS Code always use the dGPU via its shortcut:
cp /usr/share/applications/discord.desktop ~/.local/share/applications/
sed -i 's/^Exec=/Exec=prime-run /' ~/.local/share/applications/discord.desktop
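The Exec= rewrite can be sanity-checked on a throwaway file before applying it to a real launcher. The desktop entry below is a made-up example, not a real application; only the sed invocation is the same as above.

```shell
# Minimal throwaway desktop entry to verify the rewrite against:
cat > /tmp/example.desktop <<'EOF'
[Desktop Entry]
Name=Example
Exec=example --some-flag
EOF

# Same substitution as above, then inspect the result:
sed -i 's/^Exec=/Exec=prime-run /' /tmp/example.desktop
grep '^Exec=' /tmp/example.desktop
# prints: Exec=prime-run example --some-flag
```

Note that the pattern is anchored to the start of the line, so keys like TryExec= are left alone.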
The Results (Proof of Concept)
Using upower to monitor energy-rate while on battery, tested by launching Discord:
- GPU Suspended (Idle/Web Browsing): ~11.6 W
- GPU Active (dGPU Wake): ~25.1 W
- GPU Under Load (dGPU Stress): ~32.8 W
Power Savings: ~14-20W reduction in idle drain.
Useful Resources:
Arch Wiki: NVIDIA Power Management
Arch Wiki: PRIME Render Offload
Troubleshooting Note: If your GPU status is stuck at active, run the following:
nvidia-smi
If you see Discord or a browser listed (at the bottom of the "Processes" section), that app is holding the GPU awake. Close it, and the GPU should suspend within 5 seconds.
Personal note:
This experience has radicalized me into wanting a Framework laptop for the GPU upgradability.
The root issue here is that NVIDIA recently EOL'd Maxwell support in the new 590+ driver branch. Since Arch's main nvidia packages moved to 590, P51 users are left in the lurch, where the default drivers effectively break power management or refuse to load. Moving to the 580xx legacy branch and manually configuring the udev/audio stack is currently the only way to keep these workstation beasts viable on modern Arch.
As always, I hope this helps some other poor soul in need :D Tested on animals, they didn't understand it either. (talk) 22:07, 23 January 2026 (UTC)