Of new and old


This topic contains 17 replies, has 2 voices, and was last updated by  Abe Hamade 10 years, 10 months ago.

Viewing 15 posts - 1 through 15 (of 18 total)
  • Author
    Posts
  • #175

    Abe Hamade
    Participant

    I must say, I am impressed! The new method of account access is ingenious! Take that, you forum spammer bots! I am glad someone out there made a forum base that is actually usable. A ways back I looked into the exact same thing and simply didn’t find a solution I cared for. They were bloated, not well documented, or simply overkill for a simple function. Kudos to the Genesis Framework! I will definitely be looking into that.

    Now Starnick, you have been talking about the new engine for some time now. Yes, I may not be around much (actually a good thing, since work is busy again! go go economy!), but I still keep up on my interests. When can we expect to see it? Parallelism in the content pipeline!?! I am drooling over here.

    Are you still going to maintain it on Google Code? I am currently looking into a good SVN host, preferably one I can run locally (work related, not personal), and I keep coming up empty handed. Much like a good forum base, a good SVN is critical. Lately I’ve seen a trend of people going to GitHub, but honestly, I don’t care for it. Codeplex is too M$ for me as well. What are your thoughts on that?

    -Abe Hamade

    #177

    Starnick
    Keymaster

    Thanks Abe. The forum software is bbPress, but it works nicely with the Genesis Framework. If you haven’t noticed, the entire site, including the forums, is responsive. bbPress required a few tweaks to get that working, although for the most part it’s compatible out of the box. I still need to integrate the responsive DokuWiki template into the site.

    And yep, the Task-Parallel Library (TPL) is being used in several places of the Content pipeline. Notably, the content manager has the following additions:

    • Task<T> LoadAsync<T>(String, ImporterParameters)
    • Task<T> LoadRelativeToAsync<T>(String, IResourceFile)
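
    A quick sketch of how those might be used. The content manager variable, asset paths, the Texture2D/Mesh types, and ImporterParameters.None are all placeholders of mine, not the engine's confirmed API:

```csharp
// Hypothetical usage sketch: kick off two loads in parallel and await both.
// The asset paths and types here are invented for illustration.
Task<Texture2D> textureTask = content.LoadAsync<Texture2D>("Textures/brick.dds", ImporterParameters.None);
Task<Mesh> meshTask = content.LoadAsync<Mesh>("Models/wall.mesh", ImporterParameters.None);

// Since the importers are stateless, both loads can safely run concurrently.
await Task.WhenAll(textureTask, meshTask);

Texture2D texture = textureTask.Result;
Mesh mesh = meshTask.Result;
```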

    To make this possible, the resource importers are now all stateless. The TPL is also used in several other places (e.g. writing out external ISavables for the binary exporter). Additionally, the whole content pipeline has gotten a lot of design love with respect to ISavable and also filesystem abstraction.

    The ISavable writers/readers now have a concept of Shared/External (much like the XNA pipeline). Shared means a single instance (e.g. a vertex buffer) that is shared within the ISavable object hierarchy; it’s an indexed list that is saved alongside the primary ISavable object. External savables are file references, e.g. a texture or animation file that is relative to the ISavable object. These external files are written out with their own handlers, so one may be another binary object, or a material, or a custom format.
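
    As a rough sketch of that shared/external split (the writer method names below are my own guesses from the description, not the engine's confirmed API):

```csharp
// Hypothetical sketch of the Shared/External concept; WriteSharedSavable
// and WriteExternalSavable are assumed names, not a documented API.
public class MeshPart : ISavable
{
    private VertexBuffer m_vertexBuffer; // shared within the object hierarchy
    private Material m_material;         // external, written to its own file

    public void Write(ISavableWriter output)
    {
        // Written once into an indexed list alongside the primary object;
        // other parts referencing the same buffer store only the index.
        output.WriteSharedSavable("VertexBuffer", m_vertexBuffer);

        // Written to a separate file by its own handler (binary object,
        // material, or custom format); only the reference is stored here.
        output.WriteExternalSavable("Material", m_material);
    }

    public void Read(ISavableReader input) { /* mirror of the above */ }
}
```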

    For the file system, resource locators have evolved into IResourceRepository which abstracts any repository of content. That may be a folder on disk, it may be a remote URL, it may be a ZIP file or similar archive. So there’s a concept of not just location now, but of opening/closing a connection to the repository, and enumerating resource files that handle opening a stream to the piece of content.
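
    Going by that description, the abstraction might look something like this (member names are my inference, not the real interface):

```csharp
// Sketch of a content repository abstraction: a folder on disk, a remote
// URL, a ZIP archive, etc. All member names here are assumptions.
public interface IResourceRepository
{
    bool IsOpen { get; }

    // A repository now has an explicit lifetime, e.g. opening the archive
    // or establishing a remote session, not just a location.
    void OpenConnection();
    void CloseConnection();

    // Resource files handle opening a stream to the piece of content.
    IResourceFile GetResourceFile(String resourcePath);
    IEnumerable<IResourceFile> EnumerateResourceFiles(bool recursive);
}
```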

    Right now I’m engaged with the graphics system (low-level part), namely with resources and shaders for the moment. The goal is to keep the graphics system thread-safe too. There are two big changes:

    IRenderSystemProvider is now IRenderSystem, a service that represents the entire graphics subsystem, namely for creation of GPU resources. That part hasn’t changed much, but the creation process has. The render system is now a repository of implementation factories, so new factories can be added in the future, and different render systems may support only a subset of GPU objects. Each GPU object has its own implementation and implementation factory that the render system uses to initialize the GPU object. You can think of it along the lines of feature levels in D3D10/11. Canonical example: a WPF interop Texture2D that has a custom Texture2D implementation factory… an OpenGL render system would not have this registered, but the D3D11 render system would. Also, object creation is thread-safe.
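
    The factory idea might look roughly like this (all type and method names below are illustrative guesses, not the engine's actual API):

```csharp
// Hypothetical sketch: the render system as a repository of implementation
// factories, keyed by the kind of GPU object they create.
IRenderSystem renderSystem = engineServices.GetService<IRenderSystem>();

// A D3D11 render system might register a WPF-interop Texture2D factory;
// an OpenGL render system simply would not have this entry, much like a
// lower feature level lacking a capability.
renderSystem.AddImplementationFactory(new D3D11WpfTexture2DImplementationFactory());

// Creating a GPU object then looks up the matching factory, which
// initializes the object with a platform-specific implementation.
Texture2D texture = new Texture2D(renderSystem, width: 256, height: 256);
```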

    IRenderer is no longer a catch-all interface. It has always been the “Device” as well as a high-level renderer; now that functionality is split. IRenderContext is the “device” as in D3D11, and it contains all the low-level drawing functionality you’d expect. There can be more than one context in a multi-threaded scenario, following D3D11’s example. The higher-level stuff (render queues, the camera, and material-engine property variable binding) will be in the renderer as before. So what this means is, the renderer is purely a high-level concept; it’s what you create to form the core of your app/game rendering. Example: a Forward Renderer vs. a Deferred Renderer, so that high-level logic is separate from simple stuff like setting vertex buffers, drawing indexed, etc.

    My goal is to offer simple implementations of both. The Deferred Renderer I’ll be writing anyways for a project that’s coming later in the year :) .

    Anyways, those are some tidbits. I really should write a blog post, but the last 2-3 weeks my time has been limited to the weekends, and I’d rather make some more headway with the code. I do want to start a weekly “Friday Feature” spotlight; I actually have a post about the new changes in the Math library written up… it just needs some editing and formatting.

    As for Google Code, I’m pretty sure I’ll be staying there. I use Google Code + SVN for my other project, AssimpNet. GitHub does seem to be all the rage these days, but honestly I’m more comfortable with SVN. If I were working on a project with lots of other developers, Git would be a good idea, but that’s not my situation. The only benefit of using Git in my case would be that it’s in fashion and shiny! I plan on moving the Tesla v1 code to a branch on the Google Code page to make way for v2, so it’ll be the same repository!

    #179

    Starnick
    Keymaster

    Also, a really cool new feature in the content pipeline is the IPrimitiveReader/IPrimitiveWriter pair.

    Basically, the original ISavableReader/Writer has been broken out into two interfaces. A “Primitive” is a struct type that implements IPrimitiveValue, an interface that the entire math library implements.

    This was necessary early on because the Math library has received an explosion of new types (mostly vectorization of bools and ints for use with shaders). Writing those struct types to some output, or even using them in the effect framework, would be burdensome because I’d have an explosion of methods in an interface! I really wanted something simple, straightforward… and generic. Thus the primitive concept was born (this also applies to some of the IDataBuffer changes and the snazzy IL injection interop-generator stuff… but I’ll save that for another day).

    Basically, an IPrimitiveValue is a lightweight savable. An ISavable is always a class; like it, the primitive value also has methods to write/read itself to/from some output/input. Simple idea, but it makes life really easy. The readers/writers can handle:

    • All C# primitives + String values. Single value + Array
    • Write<T>(String name, ref T value) (incl. non-ref overload)
    • Write<T>(String name, T[] values)
    • WriteNullable<T>(String name, T? value)
    • WriteEnum<T>(String name, T enumValue)

    One thing that was removed was the 2D array writing/reading. Honestly, I never had a use for it, and if you ever need that functionality, it’s easy enough to do yourself.
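
    To illustrate the primitive concept: a struct writes/reads itself, so one generic method on the writer covers every math type. The interface shape below is inferred from the description; the exact member names are assumptions:

```csharp
// Hypothetical sketch of a primitive value: a lightweight struct savable.
public struct Vector3 : IPrimitiveValue
{
    public float X, Y, Z;

    public void Write(IPrimitiveWriter output)
    {
        output.Write("X", X);
        output.Write("Y", Y);
        output.Write("Z", Z);
    }

    public void Read(IPrimitiveReader input)
    {
        X = input.ReadSingle("X");
        Y = input.ReadSingle("Y");
        Z = input.ReadSingle("Z");
    }
}

// One generic method handles every primitive, instead of the interface
// needing WriteVector3, WriteMatrix, WriteBool3, ... per math type:
//     writer.Write<Vector3>("Position", ref position);
```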

    #181

    Abe Hamade
    Participant

    Those are some amazing changes coming!! Sounds like the new render system will pave the way for an easily implemented WPF interop. Are you still planning on changing the backend core over to SharpDX? I love SharpDX, it’s amazing. Plus, I think it’s much more active than SlimDX as far as development goes. From what it sounds like, the rendering system can easily support any backend; it just needs to be coded against the interface? I am interested in seeing how assets are handled. Are you still planning to use the script-based system from before? One thing XNA did well was its management of content at design time. It integrates well with development. Scripting is much more versatile, but things like syntax checking are lacking at design time and require external tools.

    I am glad you’re staying with Google Code, I like it as well. I am still trying to determine which SVN to use for local-only development applications. Any thoughts there?

    As far as the website goes, it is amazingly fast. I do like it a lot. Google auth is rocking. Those little things are what many other sites lack. I personally hate having to maintain 10 million passwords and user accounts.

    You should totally do a spotlight on whatever it is you’re working on. I would love to keep up with it.

    I truly am excited to see 2.0 in action, can’t wait!!

     

    #183

    Starnick
    Keymaster

    Yep, we’re on SharpDX, and have been for a long time now actually. Although it’s not quite as complete as the D3D10 SlimDX implementation in v1 right now. It’s getting there. That reminds me too… like SharpDX (and AssimpNet), Tesla v2 is AnyCPU. Btw, if you haven’t already, I would check out AssimpNet; it’s stand-alone from the engine and has gotten wide use so far, including in the SharpDX toolkit and MonoGame. Quite proud of that! Also, that project has sometimes been a delayer of Tesla v2, but it’s a worthy excuse at least.

    And yes, the graphics interfaces are designed for different backends. While I’m clearly focused on D3D11 and take design cues from it, I plan on targeting OpenGL 4.x. I’ve gotten my feet a little wet programming with OpenTK for that reason and usually keep tabs on the official OGL documentation. Although I have been promising that even in v1, heh. Maybe next year? I think the XNA implementation is dead at this point, though.

    Ultimately, with non-D3D11 backends (whether OpenGL 4.x or something earlier to support other devices), the idea is that the backend will effectively be “mapped” onto D3D11-style concepts. Some things may have to be emulated (e.g. thread safety), but as far as missing features go, that’s where the implementation factories come into play. A lot like feature levels, as I said… the idea is to have some level of consistency, where the big variation is in performance. The IGraphicsAdapter interface, as in the first engine incarnation, handles queries for GPU resource limits/capabilities.

    I don’t have plans right away for something like XNA’s content project, although the idea of having MSBuild tasks for compile-time content is doable. The SharpDX toolkit does this pretty well, so at some point I’d love to offer that. Mostly because the content pipeline is intended for compile-time processing of assets, so it makes sense. This is a must for non-PC platforms like the Windows Store (e.g. shaders must be pre-compiled). But I always liked the option of at least being able to process content at runtime too, so that hasn’t changed. I’m quite happy with the overall design of Tesla’s content serialization, and in more ways than one, it’s a more striking and much-needed feature than the graphics system. Everyone’s engine does graphics!

    As for scripting, I assume you’re talking about the material files (TEM)? That’s not going away; it’s a highlight of the engine! There will be some syntax changes in the upcoming version though, mostly in regard to render state changes. Although keep in mind, that’s the only scripting we have – it’s essentially a custom content format, not a scripting system for the whole pipeline. A Material can be serialized into a binary object just like any other ISavable, too.

    #199

    Starnick
    Keymaster

    Heh. So I kicked off the Friday Feature series (on a Sunday). Thought I’d start with the Tesla.Interop stuff.

    #208

    Abe Hamade
    Participant

    Hey, don’t get me wrong, I wasn’t saying the TEM files were a bad thing; they are actually a great thing. I was actually talking about design-time support for materials like in XNA content management. Having both methods is a good thing. You nailed it when you were talking about other platforms. Also, not sure if you are aware, but there is an open-source implementation of XNA. Not saying it’s something you need to implement, just that with your engine designed the way it is, implementing any adapter wouldn’t be too much work.

    Now time to check out the Friday Sunday Feature :)

    #210

    Starnick
    Keymaster

    Yeah the XNA implementation is dead…but not necessarily a MonoGame one.

    But then you’re still stuck with D3D9-level features, in exchange for quick multi-platform support. There are a lot of new post-D3D9 features coming such as geometry shader support and (real) texture arrays. I still want to support those on OpenGL too.

    I should also mention that an end goal is to be Mono compatible. I haven’t had too much exposure to Mono yet, though. You could say AssimpNet is my testbed for that, as it’s a whole lot closer to Mono support than Tesla v2 at the moment.

    #212

    Abe Hamade
    Participant

    Yes, I agree that the D3D9 features of XNA are its Achilles’ heel; not sure if MonoGame will support D3D10/D3D11. I thought I read somewhere that they will. I only just came across it last night, actually.

    Switching gears on you here: this whole interop method being employed is actually code replacement with optimized code? This is something I used to do back in the Amiga programming days (albeit for a completely different reason than you’re using it for now). I will dive more into AssimpNet to see how it’s implemented.

    Good Stuff so far Starnick!!

    #214

    Starnick
    Keymaster

    I’m not sure about that; maybe they’ll have an extension, but the point of such an API is to mimic the official XNA one in order to ease porting. Plus, it allows you to take an XNA game that can run on Xbox and easily compile it against MonoGame to work on other platforms. So changing the API would hinder that. Also, as far as I understand things, they still have some parts of their API missing. Heh, probably by the time I get to that point with Tesla, things will be very different anyways.

    As for the generator, it’s not so much optimized code (you certainly can think of it that way though), just code that cannot be compiled from C#, but that can be compiled if you’re writing a C++/CLI interop assembly. You should be able to accomplish it by creating dynamic methods too (System.Reflection.Emit), but the point is to do it at compile time and keep things simple.

    Basically, it’s one of those bits that sets SharpDX apart from SlimDX. SlimDX is written in C++/CLI to interact with the native Direct3D interfaces and to do any of the interop. SharpDX is AnyCPU and pure C#, where most of its interop (and interactions with Direct3D) is generated at compile time. Maximizing portability while also maximizing performance is simply downright sexy.
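
    To make the "code that cannot be compiled from C#" point concrete, here is a small runtime illustration using System.Reflection.Emit: it emits the cpblk opcode, a raw block copy that C# has no syntax for. This is only an analogy of mine, not the generator's actual code; the generator does the equivalent statically by patching IL into the compiled assembly (compile with /unsafe):

```csharp
using System;
using System.Reflection.Emit;

class CpblkDemo
{
    // An unsafe delegate matching the dynamic method's signature.
    unsafe delegate void MemCopyDelegate(void* dest, void* src, uint byteCount);

    static unsafe void Main()
    {
        var method = new DynamicMethod("MemCopy", typeof(void),
            new[] { typeof(void*), typeof(void*), typeof(uint) });

        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // destination pointer
        il.Emit(OpCodes.Ldarg_1); // source pointer
        il.Emit(OpCodes.Ldarg_2); // byte count
        il.Emit(OpCodes.Cpblk);   // block copy; no C# equivalent
        il.Emit(OpCodes.Ret);

        var memCopy = (MemCopyDelegate)method.CreateDelegate(typeof(MemCopyDelegate));

        int src = 42, dst = 0;
        memCopy(&dst, &src, sizeof(int));
        Console.WriteLine(dst); // prints 42
    }
}
```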

    Up until recently, I had the compiled Tesla.Interop.Generator.exe up on the AssimpNet trunk, but it was compiled for .NET 4.5, which caused issues for some people compiling the library on WinXP targeting .NET 2.0. So now the source (a single file, which I linked in the blog post) is included in the project. But really, you can take that, create your own .exe, and use it on any other assembly, provided that assembly has the InternalInterop stub class, as the generator looks for those method calls (much like how the SharpDX tool operates). It’s really that easy… and the generator will also sign the assembly if you give it a key.

    I opted to integrate the interop stuff directly into the AssimpNet core as I didn’t need the raw buffers, nor did I want end users to have too many dependencies (keeping it simple: much better to work with a single AssimpNet.dll, since they would not be expecting a “Tesla.Interop.dll” dependency). But when I do eventually release Tesla v2, that option is there.

    #216

    Starnick
    Keymaster

    FYI, I do believe I read on the SharpDX forums that the library can be compiled against the Mono framework. Granted, it’s still Windows or Windows-device only due to the nature of Direct3D, but that’s something you can’t even do with SlimDX, since Mono does not support C++/CLI.

    #218

    Abe Hamade
    Participant

    I know we went completely off topic here, but now this interop functionality has me dazzled. Correct me if I am wrong here, but what it’s doing is directly injecting byte code that was compiled in some other dll or library in place of existing code, and yet it is still compiled and verified at compile time?

    #220

    Starnick
    Keymaster

    If I read that right, yes. It’s taking an already-compiled assembly, injecting hand-written IL code at strategic points, then saving the assembly (and potentially signing it) back to disk. So it’s not running anything through the compiler, as there’s nothing to compile. Unfortunately, that means the hand-written IL code isn’t exactly verified for correctness; it’s what-you-see-is-what-you-get.

    So the build steps would be like:

    1. Compile <your assembly>.dll

    2. Execute Tesla.Interop.Generator.exe, taking the path to <your assembly>.dll as input (with an optional path to a key file)

    3. Out comes the patched <your assembly>.dll, written to wherever you specified

    Nothing to do with Csc except for the initial first step (obviously).
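
    As a post-build script, those steps might look like the following; the file names and the key-file argument are illustrative guesses, not the tool's documented command line:

```shell
# 1. Compile the assembly normally (stub calls to InternalInterop included)
csc /target:library /unsafe /out:MyEngine.dll InternalInterop.cs Buffers.cs

# 2. Patch the compiled IL, optionally signing with a key file
Tesla.Interop.Generator.exe MyEngine.dll MyKey.snk

# 3. The patched (and optionally signed) MyEngine.dll is written back out
```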

     

    I’d definitely look up more information on Mono.Cecil, as it’s a pretty powerful library. Lots of interesting uses, even for non-interop… e.g. stripping out certain bits of functionality or adding new functionality to an existing assembly, without recompiling the original source code. It’s used by a lot of the Mono tools (e.g. their debugger).

    E.g. the .NET 2.0 build of Mono.Cecil internally declares an extension attribute, so if you use it post-2.0 you’ll get a compile warning. You can actually use Mono.Cecil to strip that attribute from Mono.Cecil in order to remove that compiler warning, heh.
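
    A sketch of that trick using the Mono.Cecil API (load an assembly, remove a type, save it back). This is the gist only; a complete tool would also need to fix up any remaining references to the removed attribute:

```csharp
using Mono.Cecil;

class StripAttribute
{
    static void Main()
    {
        // Load the assembly and get its main module.
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly("Mono.Cecil.dll");
        ModuleDefinition module = assembly.MainModule;

        // Remove the internally declared ExtensionAttribute so a post-2.0
        // compiler no longer warns about the duplicate declaration.
        for (int i = module.Types.Count - 1; i >= 0; i--)
        {
            TypeDefinition type = module.Types[i];
            if (type.Namespace == "System.Runtime.CompilerServices" &&
                type.Name == "ExtensionAttribute")
                module.Types.RemoveAt(i);
        }

        // Write the patched assembly back to disk under a new name.
        assembly.Write("Mono.Cecil.Patched.dll");
    }
}
```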

    #222

    Abe Hamade
    Participant

    OK, so I obviously need to read more about how this works. You say Tesla.Interop.Generator.exe takes only one real argument: the dll to perform its operation on. So where does it get the replacement code from? The hand-written IL code is what I am referring to, which I believe should be able to be compiled elsewhere for correctness? I saw Mono.Cecil; I can read more there, since it sounds like Tesla.Interop.Generator.exe does the same thing? Not sure there, and I can’t imagine it would, since why would you write a tool for something that already exists, unless you wanted to customize it? Can you point me in the direction of more information on Tesla.Interop.Generator.exe? I would like to play with it to better understand how it works. Sounds like tons of fun!

     

    #224

    Starnick
    Keymaster

    Oh. The hand-written IL code is hardcoded in the generator, if you take a look at the source:

     

    https://code.google.com/p/assimp-net/source/browse/trunk/AssimpNet.Interop.Generator/Program.cs

     

    Starting at about line 181, there are methods for each stub that either replace the method call or replace the method body outright (as described in the blog post); those contain the IL that gets injected. So yeah, the generator is very, very specific in what it does. Although, you should get a feel for how to iterate over the data in the assembly (in effect, it’s scanning -every- type and method in an assembly looking for method calls to InternalInterop). It’s not any different than using the Reflection.Emit functionality in .NET, if you’ve ever used that before. Just emitting the IL opcodes…
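
    That scanning loop boils down to something like the following with Mono.Cecil. This is a simplified sketch of mine that just reports the call sites instead of rewriting them as the real generator does:

```csharp
using Mono.Cecil;
using Mono.Cecil.Cil;

class FindInteropCalls
{
    static void Main(string[] args)
    {
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(args[0]);

        // Walk every type (including nested types) and every method body,
        // looking for call instructions targeting the InternalInterop stub.
        foreach (TypeDefinition type in assembly.MainModule.GetTypes())
        {
            foreach (MethodDefinition method in type.Methods)
            {
                if (!method.HasBody)
                    continue;

                foreach (Instruction instr in method.Body.Instructions)
                {
                    if (instr.OpCode != OpCodes.Call)
                        continue;

                    var callee = instr.Operand as MethodReference;
                    if (callee != null && callee.DeclaringType.Name == "InternalInterop")
                    {
                        // The real generator would rewrite this call site (or
                        // the whole method body) with hand-written IL here.
                        System.Console.WriteLine("{0} -> {1}", method.FullName, callee.Name);
                    }
                }
            }
        }
    }
}
```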

