Nvidia GPUs do not support DX12 Asynchronous Compute/Shaders

This all came about due to Oxide’s upcoming DX12 RTS, Ashes of the Singularity. When Oxide released the benchmark, various outlets found that AMD GPUs saw massive performance gains moving to DX12 (partly from shedding DX11 driver overhead, partly because GCN’s ACEs sit idle under DX11), while Nvidia GPUs actually did worse in DX12.

Nvidia fired some pretty harsh words at Oxide, dismissing the game as unrepresentative of DX12 titles and claiming it had an MSAA bug. Oxide fired back, saying the bug is actually in Nvidia’s drivers, and offered to help them fix it.

Oxide then published a nice blog post about DX11 vs DX12 clarifying the situation: they are not out to gimp anyone’s hardware, they simply play fair by the DX12 book.

This escalated on the tech forums, with heated accusations thrown around, and so Oxide entered the discussion with this bombshell:
Maxwell doesn’t support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than not to.

Followed up with this:
Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path.

Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that.

I suspect that one thing that is helping AMD on GPU performance is D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic where we just took a few compute tasks we were already doing and made them asynchronous, Ashes really isn’t a poster-child for advanced GCN features.

Our use of Async Compute, however, pales in comparison to some of the things which the console guys are starting to do. Most of those haven’t made their way to the PC yet, but I’ve heard of developers getting 30% GPU performance by using Async Compute.

And finally this, where they basically challenge Nvidia to prove them wrong:
There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.
It looks like Oxide is angry at NV for trying to make them look like fools when the problem lies in NV’s own hardware, so Oxide took it personally and went public.

NVIDIA claims “full support” for DX12, but conveniently ignores that Maxwell is utterly incapable of performing asynchronous compute without heavy reliance on slow context switching.

GCN has supported async shading since its inception, and it did so because we hoped and expected that gaming would lean into these workloads heavily. Mantle, Vulkan and DX12 all do. The consoles do (with gusto). PC games are chock full of compute-driven effects.
Looks like this is escalating, time to prepare the popcorn and see NV’s response!

As to why Async Compute/Shaders are so important in DX12 & future cross-platform games:

  1. Compute is used for global illumination, dynamic lighting, shadows, physics, and post-processing (even some AA). If that work can be offloaded from the main rendering pipeline and executed asynchronously in parallel, it can yield major performance gains. GPUs that support it will see a significant uplift, while GPUs that do not simply gain nothing: execution reverts to the normal serial interleaving of graphics and compute.
  2. Async Shaders are also vital for a good VR experience, as they help lower the latency from head movement to photon output. I posted on this topic a while ago:
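A back-of-the-envelope way to see where gains like the ~30% figure above could come from: under serial execution the graphics and compute passes run back to back, while under (idealized, perfectly overlapped) async execution the frame is only as long as the longer of the two. This is a toy Python model with made-up timings, not actual GPU code or measured data:

```python
def frame_time_serial(graphics_ms, compute_ms):
    """DX11-style: graphics and compute passes run one after another."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms):
    """Idealized async compute: compute fully overlaps the graphics pass,
    so the frame ends when the longer of the two workloads finishes."""
    return max(graphics_ms, compute_ms)

if __name__ == "__main__":
    g, c = 10.0, 4.0  # illustrative per-frame costs in ms, not measurements
    serial = frame_time_serial(g, c)      # 14.0 ms
    overlapped = frame_time_async(g, c)   # 10.0 ms
    gain = (serial - overlapped) / serial
    print(f"serial: {serial} ms, async: {overlapped} ms, gain: {gain:.0%}")
```

In practice the overlap is never perfect (compute fills idle shader units and bandwidth left over by the graphics pass), but the model shows why hardware that can run the queues concurrently gains, while hardware that serializes them gains nothing.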


For those curious about upcoming DX12 games,

Mirror’s Edge : Feb 2016

Rise of the Tomb Raider: Q1 2016

Deus Ex: Mankind Divided: Feb 2016

Hitman (same game engine as Deus Ex: MD, from Square Enix).

Star Wars Battlefront (later this year; DICE have said Vulkan or DX12).

Fable Legends (2016); Lionhead Studios showcased it alongside AMD at E3, with a focus on Async Compute features.
