Hello,
Here is a comment about images in Mantle.
Now, in Mantle, textures are ultimately composed of three layers of objects (see the sketch after this list):
1) memory object
2) image object, which is a descriptor (metadata)
3) view object, which is another descriptor (metadata)
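To make the question concrete, here is roughly what that setup looks like. I'm writing it from memory of the Mantle docs, so take the exact names, struct fields and signatures as assumptions rather than as the real thing:

// Rough sketch of the three layers (names/fields from memory, may be off).
// Assumes a valid GR_DEVICE named device already exists.

// 1) memory object: the raw allocation that will hold the texel data
//    (in practice you would query the image's requirements first, or
//    suballocate from a larger pool)
GR_MEMORY_ALLOC_INFO allocInfo = {};
// ... fill size/alignment/heaps ...
GR_GPU_MEMORY mem = 0;
grAllocMemory(device, &allocInfo, &mem);

// 2) image object: a descriptor (dimensions, format, mips, usage) that only
//    becomes usable once it is bound to the memory object above
GR_IMAGE_CREATE_INFO imageInfo = {};
// ... fill extent, format, mipLevels, usage, ... ...
GR_IMAGE image = 0;
grCreateImage(device, &imageInfo, &image);
grBindObjectMemory(image, mem, /*offset=*/0);

// 3) view object: yet another descriptor layered on top of the image;
//    this is the thing that actually gets attached for shader access
GR_IMAGE_VIEW_CREATE_INFO viewInfo = {};
// ... fill image, format, subresource range, ... ...
GR_IMAGE_VIEW view = 0;
grCreateImageView(device, &viewInfo, &view);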
I was wondering: why is this double-descriptor model necessary? It looks redundant and over-engineered.
Consider that views in DX were used to allow the same piece of texture to be treated in different ways while avoiding a very expensive copy/duplication, because texture objects there own their memory.
Now, I don't think it was a good API even there, because there are simpler ways to do the same thing (as in OpenGL) instead of adding more object types and making the API cumbersome.
(Please also consider that every time a developer wants to use a simple static texture, they need to create a view too, which is pointless, and I consider it bad API design.)
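For context, this is the DX pattern I mean, e.g. in D3D11: the texture owns its storage, and the views just reinterpret it, with no copy involved. The formats and flags here are only for illustration, and an ID3D11Device* named device is assumed to exist:

#include <d3d11.h>

// One texture that owns its memory; TYPELESS so the views get to pick the format.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = 256;
texDesc.Height           = 256;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R8G8B8A8_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &tex);

// Two views over the same storage, each imposing a different format. Note that
// even a plain static texture needs at least one view before a shader can sample it.
D3D11_SHADER_RESOURCE_VIEW_DESC unormDesc = {};
unormDesc.Format              = DXGI_FORMAT_R8G8B8A8_UNORM;
unormDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
unormDesc.Texture2D.MipLevels = 1;

D3D11_SHADER_RESOURCE_VIEW_DESC srgbDesc = unormDesc;
srgbDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;

ID3D11ShaderResourceView* srvUnorm = nullptr;
ID3D11ShaderResourceView* srvSrgb  = nullptr;
device->CreateShaderResourceView(tex, &unormDesc, &srvUnorm);
device->CreateShaderResourceView(tex, &srgbDesc,  &srvSrgb);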
But in Mantle that isn't even the case. The images there are simple descriptors (a few bytes that describe size and format), so why on earth do we need two layers of descriptors, one on top of the other?
If one needs to re-use the same texture/target/whatever with a different format/usage/whatever, let's just create a new image object and attach the same memory object to it.
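Something along these lines (same placeholder-ish names as in the earlier sketch, and assuming the implementation lets two image objects alias one allocation):

// Proposed alternative: no separate view type, just two image descriptors
// that alias the same memory object (names/fields are assumptions, as above).

GR_IMAGE_CREATE_INFO asUnorm = {};
// ... extent/mips as before, format = RGBA8 UNORM ...
GR_IMAGE imageUnorm = 0;
grCreateImage(device, &asUnorm, &imageUnorm);
grBindObjectMemory(imageUnorm, mem, /*offset=*/0);

GR_IMAGE_CREATE_INFO asSrgb = asUnorm;
// ... same extent/mips, format = RGBA8 sRGB ...
GR_IMAGE imageSrgb = 0;
grCreateImage(device, &asSrgb, &imageSrgb);
grBindObjectMemory(imageSrgb, mem, /*offset=*/0);

// Both images now describe the same texels; bind whichever descriptor matches
// how you want to read/write them, no third object type needed.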
Maybe I'm missing something?