Shader Resources and Render Targets


So last week I established a general overview of how the graphics system is organized and the major tenets of its design. One major aspect is that each graphic object is considered a “feature” supported by the render system. With that idea in hand, we can see how an implementation is built from separate blocks of functionality that, in general, are independent of one another. What they all have in common are the IRenderSystem and IRenderContext interfaces. So we have a mechanism to grow functionality and support different APIs or platforms that may have varying capabilities and feature support.

For this week, allow me to dive further down the rabbit hole and talk more about resource organization. Inherently, the graphic resource objects that have implementation factories are heavyweight; it takes three different objects to create them – a “domain” non-abstract object, its implementation, and its implementation factory. E.g. VertexBuffer as the “domain” object, IVertexBufferImpl as the implementation interface, and IVertexBufferImplFactory as the implementation factory. I use the “domain” term because this is the object you’re going to be interacting with 100% of the time in your application, and it’s something that represents a “feature”. As far as you’re concerned as an application developer, the other objects don’t exist, since they’re mechanisms behind the scenes. They are necessary, of course, if you’re writing your own implementations or extensions to an existing implementation.
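To make that three-object relationship concrete, here is a minimal sketch of how the pattern could fit together. Apart from the VertexBuffer/IVertexBufferImpl names from the text, every member and method name here is an illustrative assumption, not the engine's actual API:

```csharp
// Hypothetical sketch of the domain/implementation/factory triad.
// GetImplementationFactory and CreateImplementation are assumed names.

// The "domain" object the application interacts with directly.
public class VertexBuffer
{
    private readonly IVertexBufferImpl _impl;

    public VertexBuffer(IRenderSystem renderSystem, int vertexCount)
    {
        // The render system hands back whatever factory is registered for
        // this feature; the domain object never knows (or cares) which
        // concrete implementation it received.
        var factory = renderSystem.GetImplementationFactory<IVertexBufferImplFactory>();
        _impl = factory.CreateImplementation(vertexCount);
    }
}

// Platform-specific implementation contract.
public interface IVertexBufferImpl
{
    int VertexCount { get; }
}

// Factory queried from the render system; absent when the feature is unsupported.
public interface IVertexBufferImplFactory
{
    IVertexBufferImpl CreateImplementation(int vertexCount);
}
```

The point of the sketch is the indirection: the domain object is the stable public surface, while the factory lookup is what lets a feature quietly not exist on a platform that can't support it.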

The new graphic system also introduces two concepts that are interface-only: IShaderResource and IRenderTarget. All resources are now treated in a unified manner, and by having these as pure interfaces, they allow for lighter non-domain graphic resources. I use the non-domain term since these resources are managed entirely by the implementation and not the user.

Example: The user asks for a subresource from a texture array. He does not create this resource explicitly, and therefore does not control its lifetime. But it’s still a resource nonetheless, and must be treated in the same way as other user-created resources.

The basic idea with resources is that how a resource is bound to the pipeline is strictly an implementation detail. All you care about is that it happens, not how it happens (declarative vs. imperative API). This allows for the proliferation of different resource types.

Shader Resources

The old engine design followed XNA’s Effect implementation rather closely. Really, the only “resource” an effect was able to set was Texture (which included render targets). Now, a “shader resource” takes its inspiration from the Resource Views of Direct3D10/11. Any object, whether it’s a GraphicResource or not, can be a shader resource, as long as it implements the IShaderResource interface. Currently, all the texture types implement it, as do the stream output buffer and the sampler state.

        public interface IShaderResource {
            ShaderResourceType ResourceType { get; }
        }

There are two ways of setting a shader resource: indirectly via an effect parameter or directly by slot index via a common shader stage in the render context.

Since the treatment of shader resources is now universal and resource-type neutral, effect parameters no longer have a dedicated “SetTexture”. There are no more hard-coded methods saying “set your structured buffer or texture here”, as it’s now merely an implementation detail. The beauty of this, of course, is that for an XNA implementation, the shaders it supports do not support structured buffers, nor will its render system offer that implementation factory. So that feature transparently goes away if you’re using an XNA feature set, and the API does not have a glaring “MISSING FUNCTIONALITY HERE” gap. It’s a pet peeve of mine when an API has functionality that you think works, because it’s there, but in fact doesn’t. The engine doesn’t work like that.

For directly setting shader resources on a render context, I have expanded and generalized the texture/sampler state collections for Vertex/Pixel shaders that were on the old renderer interface. That setup was very XNA-like. Now we’re following the common shader core paradigm – a separate shader stage for every shader type. Each shader stage object can be queried from the render context, and each stage has a number of shader resource and sampler state slots. If a shader stage is not supported (e.g. GeometryShader in XNA), that stage simply is not present. This is very similar to how implementation factories or render extensions are set up, where a feature can be queried to determine whether it’s supported. So for an XNA implementation we would only have Vertex/Pixel shader stages available, but for the Direct3D11 implementation, all the shader stages of the modern pipeline would be present.

        public interface IShaderStage {
            int MaxSamplerSlots { get; }
            int MaxResourceSlots { get; }

            void SetSampler(SamplerState sampler);
            void SetSampler(int slotIndex, SamplerState sampler);
            void SetSamplers(params SamplerState[] samplers);
            void SetSamplers(int startSlotIndex, params SamplerState[] samplers);

            void SetShaderResource(IShaderResource resource);
            void SetShaderResource(int slotIndex, IShaderResource resource);
            void SetShaderResources(params IShaderResource[] resources);
            void SetShaderResources(int startSlotIndex, params IShaderResource[] resources);

            SamplerState[] GetSamplers();
            IShaderResource[] GetShaderResources();
        }
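Here is a usage sketch covering both ways of binding a resource. The GetShaderStage/ShaderStageType lookup and the effect parameter SetResource call are my assumptions about the surrounding API; only the IShaderStage members come from the interface above:

```csharp
// Hypothetical usage sketch. GetShaderStage, ShaderStageType, and the effect
// parameter API are illustrative assumptions about the surrounding engine.

// Indirectly: an effect parameter accepts any IShaderResource, so textures,
// buffers, and sampler states all flow through the same call.
effect.Parameters["DiffuseMap"].SetResource(myTexture);

// Directly: query the pixel shader stage from the render context and bind
// by slot index. A null stage would mean the shader type is unsupported.
IShaderStage pixelStage = renderContext.GetShaderStage(ShaderStageType.PixelShader);
if (pixelStage != null)
{
    pixelStage.SetShaderResource(0, myTexture);
    pixelStage.SetSampler(0, SamplerState.LinearWrap);
}
```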

The eventual goal with the Direct3D11 render system is to support all resource types such as structured buffers and UAVs for compute shaders. Compute shader support would be a render extension (the Dispatch methods) rather than method declarations on the IRenderContext interface itself. So the whole organization, as I’ve mentioned, lends itself to be able to support different pieces of logic or graphic resources that may exist in one implementation but not another, and do so cleanly without impeding other render system implementations.
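The query-for-an-extension pattern described above might look something like this. The extension interface name and lookup method are illustrative assumptions; only the Dispatch concept comes from the text:

```csharp
// Hypothetical sketch of the render extension query pattern.
// IComputeShaderExtension and GetExtension are assumed names.
var compute = renderContext.GetExtension<IComputeShaderExtension>();
if (compute != null)
{
    // Only dispatch when this render system implementation supports compute;
    // on a feature set without compute shaders, the extension is simply absent.
    compute.Dispatch(threadGroupsX, threadGroupsY, threadGroupsZ);
}
```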

So far I’ve only touched on GraphicResource-derived objects, the domain “heavyweight” objects of the graphics system. What about the “lightweight” objects I mentioned? I briefly touched on this already; we see these in direct use with the new texture array objects. A texture array represents the entire array resource, but it has methods to get at the subresource at each array slice.

        public interface ITexture2DArrayImpl : ITexture2DImpl {
            int ArrayCount { get; }

            void GetData<T>(IRenderContext renderContext, IDataBuffer<T> data, int arraySlice, int mipLevel,
                                        Rectangle? subimage, int startIndex, int elementCount) where T : struct;
            void SetData<T>(IRenderContext renderContext, IDataBuffer<T> data, int arraySlice, int mipLevel, 
                                        Rectangle? subimage, int startIndex, int elementCount, DataWriteOptions writeOptions) where T : struct;

            IShaderResource GetSubTexture(int arraySlice);
        }

Notice it returns an IShaderResource instead of an ITexture2DImpl or Texture2D. The implementation doesn’t have to return a full texture, since we only care that it’s a resource that can be bound to a shader. The object returned may not necessarily be another “heavyweight” graphic resource, and in my implementations it never will be. It can merely be a handle/pointer to some implementation detail that the domain graphic resource controls. I went this route in the Direct3D11 implementation, where it is a lightweight wrapper around a ShaderResourceView.

This actually lends itself to further interface definitions in the Direct3D11 implementation – interfaces that contain getters for different resource views, since all we care about is whether the object has a view that we can bind to the pipeline, nothing else. This also allows developers to better integrate code additions into the Direct3D11 implementation. New functionality can be added from a separate DLL, without having to modify and recompile the engine.

So yep, you can actually override different resource implementations from the standard Direct3D11 implementation if you so choose. I wouldn’t though; it’s mainly a mechanism for adding new functionality.

The general rule of thumb for the Direct3D11 implementation is, when a GraphicResource implements the IShaderResource interface, its implementation then should implement the ID3D11ShaderResource interface, which has a getter to get a ShaderResourceView. The lighter objects you get from GetSubTexture(int arraySlice) will always implement the ID3D11ShaderResource interface.
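A minimal sketch of what that rule of thumb implies follows. Only the names ID3D11ShaderResource, GetSubTexture, and ShaderResourceView come from the text above; the interface body, the wrapper class, and the SharpDX-style binding are my assumptions:

```csharp
// Hypothetical sketch based on the rule of thumb above. The interface body
// and wrapper class are assumptions; a SharpDX-style D3D11 binding is assumed.
public interface ID3D11ShaderResource
{
    // The view that actually gets bound to the D3D11 pipeline.
    SharpDX.Direct3D11.ShaderResourceView D3DShaderResourceView { get; }
}

// A lightweight, non-domain resource: just a view handle plus the type tag
// required by IShaderResource. The owning texture array controls its lifetime.
internal sealed class SubTextureResource : IShaderResource, ID3D11ShaderResource
{
    private readonly SharpDX.Direct3D11.ShaderResourceView _view;

    public SubTextureResource(SharpDX.Direct3D11.ShaderResourceView view)
    {
        _view = view;
    }

    public ShaderResourceType ResourceType
    {
        get { return ShaderResourceType.Texture2D; }
    }

    public SharpDX.Direct3D11.ShaderResourceView D3DShaderResourceView
    {
        get { return _view; }
    }
}
```

This is why the binding code never needs to know what kind of object it was handed: it casts to ID3D11ShaderResource, grabs the view, and moves on.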

Render Targets

Render targets have also been unified under a common interface, IRenderTarget, which also inherits from IShaderResource. The old engine design, again, followed XNA, where there were two types of render targets – 2D and Cube. Because XNA did not have geometry shader support, you could only bind one face of a RenderTargetCube at any time. So the method that set a render target to the device took in a RenderTargetBinding, a structure that simply wrapped either a RenderTarget2D or RenderTargetCube. This was not sustainable in the new engine design, since it is allowable to bind the entire cube, or bind a texture array as a single render target, or even have a 3D render target resource (I just today read that MonoGame has apparently added such a resource type).

So to me, it was very natural to further make use of the whole “view” concept we find in Direct3D10/11. Rather than binding a specific face of a cube resource, we are simply talking about subresources. It could be an array slice, it could be a cube face, it could be the entire texture array or cube – it doesn’t matter. The IRenderTarget interface completely describes any type of render target resource (whether it’s an array, cube, etc.) and also allows a sub render target to be queried.

        public interface IRenderTarget : IShaderResource {
            IDepthStencilBuffer DepthStencilBuffer { get; }
            DepthFormat DepthStencilFormat { get; }
            SurfaceFormat Format { get; }
            int Width { get; }
            int Height { get; }
            int Depth { get; }
            int MipCount { get; }
            int ArrayCount { get; }
            bool IsArrayResource { get; }
            bool IsCubeResource { get; }
            MSAADescription MultisampleDescription { get; }
            RenderTargetUsage TargetUsage { get; }

            IRenderTarget GetSubRenderTarget(int arrayIndex);
        }

The render context has Set/Get methods for render targets that take in the new render target interface rather than the old binding structure (which no longer exists). Just like how I described shader resources, there’s a specific Direct3D11 implementation only interface called ID3D11RenderTarget that has a getter for the render target view.

There is one caveat for the RenderTargetCube: in a render system where setting the entire cube target is supported, the domain graphic resource represents the entire cube when it’s set to the render context. But in an implementation like XNA, it defaults to the first face. So for that implementation, setting the actual graphic resource is semantically identical to asking for the first sub render target. Most likely, if you’re running on a platform that has no geometry shader support, you’re going to be utilizing each cube face target separately anyway, so it’s largely a minor issue that should go unnoticed. It’s a necessary fallback mechanism in my opinion, and one that is documented.
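The two ways of working with a cube target could look like this in practice. SetRenderTarget and the CreateShadowCube helper are assumed names; GetSubRenderTarget comes from the IRenderTarget interface above:

```csharp
// Hypothetical usage sketch; SetRenderTarget and CreateShadowCube are
// illustrative assumptions about the surrounding API.
RenderTargetCube shadowCube = CreateShadowCube(); // hypothetical helper

// On a render system with geometry shader support, bind the whole cube at
// once and route triangles to faces in the geometry shader.
renderContext.SetRenderTarget(shadowCube);

// On a feature set without geometry shaders (e.g. XNA), render face by face.
// Indices 0..5 are assumed to map to the six cube faces.
for (int face = 0; face < 6; face++)
{
    IRenderTarget faceTarget = shadowCube.GetSubRenderTarget(face);
    renderContext.SetRenderTarget(faceTarget);
    // ... draw the scene for this face ...
}
```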

In Closing

I hope this sheds more light on the general organization of the graphics system that I started with last week’s post. As I wrote this, I was thinking to myself, “gee, to an outside viewer, this may all seem fairly simple and straightforward, not hard at all”. To that I say: good. I like non-convoluted APIs that are simple, concise, and straightforward. There’s a certain elegance in simplicity. Sometimes getting to that level of conciseness can be anything but simple, however.

Next week: More on the content pipeline!
