I haven’t written much in the last three weeks, almost entirely because I’ve been super busy at work, to the point where my days and weeks are kind of blending together while I work on awesome stuff. Also, I’m putting the Friday Features on an extended leave until September. But I’ve managed to eke out some time to push forward the Direct3D11 graphics implementation for the engine. I’m happy to say that I have a great deal of it done, and I have an interesting story to relate on that matter.
The engine has always formed the basis for almost all of my projects, because most of my projects are either graphics- or heavily math-based. It feels great to have a go-to library that you’ve wrought with your very own hands, a library that you know intimately inside and out. I am very iterative in how I approach software development; at times I feel it’s akin to constructing something like a Pyramid or Stonehenge (stay with me here). I have an end goal in mind and a design plan to get there; it may just be some notes on a scrap of paper, or full-blown class diagrams and use case visualizations (I go through so many notepads it’s not funny). With that starting point and direction, I always feel like I’m “hoisting” up the framework or skeleton of some structure, or a lumberjack bootstrapping himself up a tree. It’s one step at a time, making ever-growing progress, each step bringing you closer to the end goal. Watching the architecture of whatever software you’re developing take shape and organically grow and evolve, all with your design intent, is a rather fascinating and rewarding spectacle. It isn’t without its hard work, of course.
Anyways, what I’m trying to say is I doubt Tesla will ever truly be finished. It serves one purpose – to be useful to me, and by being useful to me it can be useful to other people just as well. That is why I’ve invested so much time in unglamorous engine areas such as the math library or the content pipeline. It also means that I practice eating my own dog food…Tesla is as Alpo as it gets. Take, for example, the new IDataBuffer interface that I talked about some weeks ago. The Direct3D11 implementation work has really exercised that design, and as a result the interface has changed somewhat.
While (re)writing the SetData/GetData operations for the resources (Textures, Index/Vertex buffers), I noticed a very cumbersome pattern taking root when working with data buffers. Since I’ve moved to an unmanaged “raw buffer” backing store rather than a managed array, some data buffers allowed for pointer access, while others did not (the DataBufferArray, which is essentially the original implementation from the first engine design). This posed a problem when writing/reading resources, since I wanted to copy memory directly to and from mapped pointers (rather than, say, creating a data stream each time). I suspect writing to resources is going to be a lot faster than before because of this, although I don’t have any metrics to back that up, just an eyeballing based on how we’re working with the data (writing directly to a mapped subresource databox pointer, or taking our data buffer pointer and using that in UpdateSubresource).
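For illustration, here’s a rough sketch of the two write paths using SharpDX types; the helper names (WriteViaMap, WriteViaUpdate) and parameters are made up for the example and aren’t the engine’s actual code:

using System;
using SharpDX;
using SharpDX.Direct3D11;

static class ResourceWriteSketch
{
    //Dynamic resources: map the subresource and copy bytes straight into its pointer.
    public static void WriteViaMap(DeviceContext context, Resource resource, IntPtr srcPtr, int sizeInBytes)
    {
        DataBox box = context.MapSubresource(resource, 0, MapMode.WriteDiscard, MapFlags.None);
        try
        {
            Utilities.CopyMemory(box.DataPointer, srcPtr, sizeInBytes);
        }
        finally
        {
            context.UnmapSubresource(resource, 0);
        }
    }

    //Default resources: hand the source pointer to UpdateSubresource in a DataBox.
    public static void WriteViaUpdate(DeviceContext context, Resource resource, IntPtr srcPtr)
    {
        context.UpdateSubresource(new DataBox(srcPtr, 0, 0), resource, 0);
    }
}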
But since that raw pointer to our data buffer may not always exist, the code ended up a little bit “spaghetti”: you would have two code paths, one utilizing the pointer in a nice, clean, straightforward way, and the other creating a temp byte buffer to copy bytes around. The latter only because the managed array would not be exposed at the interface level, so making assumptions that it was a DataBufferArray<T> would be, well…rather downright smelly (as in code smell). Then it dawned on me – why not take inspiration from mapping Direct3D11 subresources? Simple. Elegant. Clean.
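For context, the old two-path pattern looked roughly like the sketch below; SupportsPointerAccess, Pointer, and GetBytes are hypothetical stand-ins here, not the real interface members:

using System;
using System.Runtime.InteropServices;

static class OldPatternSketch
{
    //Hypothetical members: SupportsPointerAccess, Pointer, GetBytes.
    static void WriteToMapped(IDataBuffer dataBuffer, IntPtr mappedPtr)
    {
        if (dataBuffer.SupportsPointerAccess)
        {
            //Happy path: copy straight from the buffer's pointer.
            SharpDX.Utilities.CopyMemory(mappedPtr, dataBuffer.Pointer, dataBuffer.SizeInBytes);
        }
        else
        {
            //No pointer exposed: stage through a temp byte array, then marshal it across.
            byte[] temp = new byte[dataBuffer.SizeInBytes];
            dataBuffer.GetBytes(temp);
            Marshal.Copy(temp, 0, mappedPtr, temp.Length);
        }
    }
}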
Actually, I’m surprised I didn’t think of that in my first design iteration of the IDataBuffer interfaces. For our “raw buffer” based data buffer, it would be a simple act of returning the underlying pointer. For our managed array based data buffer, it would be pinning the managed array, which would be unpinned with the Unmap call. The design of “do you support pointer access?” was wrong, because it didn’t provide a mechanism for managing the pointer and left the client up the river without a paddle. An “oh crap, we don’t support it…now what do I have to do again? Oh great, some boilerplate code” moment, if you will. A design of “give me access and I’ll tell you when I’m done” alleviates that problem; the less code *you* as a client have to write, the better. So the following changes have been introduced to IDataBuffer:
bool IsMapped { get; }
MappedDataBuffer Map();
void Unmap();
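To make the pinning idea concrete, here is a minimal sketch of what a managed-array variant could look like; it’s simplified to return a raw IntPtr rather than the real MappedDataBuffer wrapper, and the raw-buffer variant would simply hand back its native pointer:

using System;
using System.Runtime.InteropServices;

public sealed class ManagedArrayBufferSketch
{
    private readonly byte[] m_data;
    private GCHandle m_pinHandle;
    private bool m_isMapped;

    public ManagedArrayBufferSketch(byte[] data)
    {
        m_data = data;
    }

    public bool IsMapped
    {
        get { return m_isMapped; }
    }

    public IntPtr Map()
    {
        //Pin the managed array so the GC can't move it while we hold a pointer to it.
        m_pinHandle = GCHandle.Alloc(m_data, GCHandleType.Pinned);
        m_isMapped = true;
        return m_pinHandle.AddrOfPinnedObject();
    }

    public void Unmap()
    {
        //Release the pin so the GC is free to relocate the array again.
        m_pinHandle.Free();
        m_isMapped = false;
    }
}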
Now this is the part that gets interesting. The MappedDataBuffer that gets returned when you map the buffer is a struct that implements IDisposable. It looks something like this (trimmed down for brevity):
public struct MappedDataBuffer : IDisposable
{
    public bool IsValid
    {
        get { return m_dataBuffer != null && m_dataBuffer.IsMapped; }
    }

    public IntPtr Pointer { get; }

    public int SizeInBytes { get; }

    public IDataBuffer DataBuffer { get; }

    public void Dispose()
    {
        if(IsValid)
            m_dataBuffer.Unmap();
    }
}
The reason this is interesting is that we can then write the following code:
using(MappedDataBuffer dbPtr = dataBuffer.Map())
{
    //Do work - copy bytes
}
Nifty, right? Very succinct: you declare your intent that you want to map the data buffer for reading/writing, you get your pointer (a direct access pointer, or the pinned address from a GC handle for the managed data buffer variant), you do your thing, and at the end of the day it’s automatically unmapped. And as we all (should) know, the using statement here is really just some syntactic sugar for a try-finally construct. And it’s a hell of a lot more elegant than writing the try-finally yourself (always good practice for map/unmap operations, by the way), like so:
try
{
    IntPtr ptr = dataBuffer.Map();

    //Do work - copy bytes
}
finally
{
    dataBuffer.Unmap();
}
Another little tidbit of coolness that may not be appreciated here is that this also doesn’t generate garbage. One may think that because the struct is being treated as an IDisposable, we’ll run into boxing. That’s a reasonable assumption, but the C# compiler makes an optimization here so that won’t be the case: when the resource in a using statement is a value type, Dispose is called on it directly rather than boxing it to IDisposable. I direct you to the great Eric Lippert’s (now former MS) blog; I also took a peek for myself at the compiled IL. Also worth mentioning is that the MappedDataBuffer struct has implicit operators to cast to an IntPtr, as well as addition operator overloads for doing pointer arithmetic.
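A quick usage sketch of those operators (the CopyRange helper and its parameters are made up for the example):

using System;

static class MappedPointerSketch
{
    static void CopyRange(IDataBuffer dataBuffer, IntPtr destPtr, int byteOffset, int countInBytes)
    {
        using (MappedDataBuffer dbPtr = dataBuffer.Map())
        {
            //The implicit cast hands interop APIs a plain IntPtr, and the '+' overload
            //offsets into the buffer without any manual IntPtr arithmetic.
            IntPtr src = dbPtr + byteOffset;
            SharpDX.Utilities.CopyMemory(destPtr, src, countInBytes);
        }
    }
}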
In conclusion, it’s a snazzy feature that makes working with any IDataBuffer a snap if you want fast, convenient pointer-based access to its contents. It trumps the previous design because now we have consistent and uniform access across the board, with the implementation entirely hidden and largely irrelevant to the client. It has made some of my code cleaner when interoping with SharpDX, and that’s something I love when it happens! It’s also a great example of refining your own designs and making them better after actually testing them out in the field!