Sure, I understand, I imagine DSA isn't a hardware restriction for it. The Radeon 5870 is a good low-end target these days, and going by the latest drivers it already halfway supports OpenGL 4.5, so if it had DSA and some future vendor-neutral extension for command buffers, that would greatly simplify writing modern OpenGL applications. In fact it would create a nice interim playing field of only having to support OpenGL 4.4 + DSA + hoped-for command buffers, and Vulkan. And of course it would greatly help everyone's transition towards Vulkan.
Re: Direct State Access on Radeon 5870
To answer your question explicitly: it is in the plan. I cannot say when, because software release schedules are notoriously fickle in this industry, and I do not want you to rely on uncertain information. But yes, it will be supported in an upcoming driver release.
Re: Direct State Access on Radeon 5870
Ok that's good to know it's at least planned! Thank you.
Re: Direct State Access on Radeon 5870
Don't know if it helps, but, even though GL_ARB_direct_state_access isn't supported in current drivers (15.4b), GL_EXT_direct_state_access works just fine and I haven't had any compatibility issues with it (yet).
I have a basic DSA wrapper that uses the appropriate function calls depending on the build (4.4/4.5), and aside from passing a target parameter to functions and settling for glGen* functions instead of the shiny glCreate* ones, it doesn't stop me from writing a 4.5-like codebase (see the sketch at the end of this post).
R9 290 + EXT_DSA and 760 + ARB_DSA produce the same results so it might be worth giving it a try.
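In case it helps, here's roughly what my wrapper does for texture creation. This is only a minimal sketch: USE_ARB_DSA is an illustrative build flag, CreateTexture2D is just my own name, and it assumes the relevant EXT/ARB entry points have been loaded (e.g. via GLEW).

// Minimal sketch of the build-dependent wrapper idea; USE_ARB_DSA and
// CreateTexture2D are illustrative names, not from a real codebase.
#include <GL/glew.h>

GLuint CreateTexture2D(GLsizei levels, GLenum internalFormat,
                       GLsizei width, GLsizei height)
{
    GLuint tex = 0;
#if defined(USE_ARB_DSA)
    // 4.5 / ARB_direct_state_access path: glCreate* returns a fully
    // initialized object, no bind needed.
    glCreateTextures(GL_TEXTURE_2D, 1, &tex);
    glTextureStorage2D(tex, levels, internalFormat, width, height);
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
#else
    // EXT_direct_state_access path: glGen* plus the EXT entry points,
    // which take the texture name and an extra target parameter.
    glGenTextures(1, &tex);
    glTextureStorage2DEXT(tex, GL_TEXTURE_2D, levels, internalFormat, width, height);
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
#endif
    return tex;
}

The only real differences between the two paths are the glGen*/glCreate* split and the extra target parameter on the EXT entry points, which is why swapping in the ARB version later is mostly mechanical.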
Re: Direct State Access on Radeon 5870
Thanks, that's good info - I didn't realise the EXT version existed, and it appears to be working. And knowing that the ARB one will likely be supported on our minimum-spec Radeon 5870 means I can take the risk of wrapping it as you say, to be removed later when the drivers expose the ARB version.
I imagine if the Radeon 5870 fully supported OpenGL 4.5 without performance issues, and they then made a 4.6 it also supported that deprecated all the non-DSA calls into errors, that would do a great deal to reduce developer confusion!
Re: GLSL compute shader incompatibilities
I tried with that size (such as 12), and it all works OK.
Please check carefully whether the target, format, or something else is wrong in your application; it should not be a "size" problem.
The "atomicCompSwap" issue will be fixed in the next release.
Enjoy your journey with AMD OpenGL.
Re: SAMPLE_ALPHA_TO_ONE shows no effect
Hello again,
Does anyone have any kind of information about this issue? Is this the right place to post about it?
After several more tests, we are now highly confident that this is a driver issue. We have tested several AMD cards of different generations, with the respective latest drivers, and every single one shows this issue. The respective Piglit tests (sample-alpha-to-one tests) all consistently fail. Nvidia cards are not affected.
This is a problem for us. We're using the feature in a deferred, image-based LOD system. While we have a partial workaround, it is neither free (it requires an additional GBuffer pass) nor fully effective. The result is noticeable image quality differences between AMD and NV cards, which are hard to explain to our customers. I'm also having a hard time understanding how a GL core feature can be broken for so long. Surely it must have been flagged during AMD's internal QA / compliance testing?
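For reference, the failing setup boils down to something like this. This is only a minimal sketch, not code from our engine; msaaFbo stands in for an already-created multisampled framebuffer.

// Minimal sketch of the state involved; msaaFbo is a placeholder for an
// already-created multisampled FBO.
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glEnable(GL_MULTISAMPLE);
glEnable(GL_SAMPLE_ALPHA_TO_ONE); // per spec, each sample's alpha is replaced by the maximum value (1.0)
// ... draw geometry whose fragment shader writes alpha < 1.0 ...
// Expected (and observed on NVIDIA): the resolved alpha is 1.0.
// Observed on every AMD card we tested: the shader-written alpha remains,
// as if the enable had no effect.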
Any information would be highly appreciated.
Thanks,
Alex
SSBOs in geometry shader
Hi!
When I try to access an SSBO from the geometry shader (I pass the draw ID from the vertex shader), the shader fails to link:
Geometry shader(s) failed to link. Geometry link error: HW_UNSUPPORTED. ERROR: Internal compile error, error code: E_SC_NOTSUPPORTED Geometry shader not supported by hardware
I'm running on a 7850. Does it really not support SSBOs in the geometry shader?
Re: SSBOs in geometry shader
Okay guys, I figured out the problem.
Geometry shader:
layout(location = 0) in VertexData {
    vec2 texCoord;
    flat int drawId; // from vertex's gl_DrawIDARB
} vert_in[];

struct ObjectData {
    bool someValue;
};

layout(std430) readonly buffer MyBuffer {
    ObjectData perObject[];
};

...

void main() {
    if (perObject[vert_in[0].drawId].someValue) {} // causes an error
}
If I write
int id = vert_in[0].drawId;
ObjectData data = perObject[id];
if(data.someValue) {}
the error disappears.
Seems to be an AMD driver bug.
I also have another problem.
I want to pass drawId from the vertex shader to the geometry shader and then on to the fragment shader.
If I use perObject in the vertex shader, everything is OK. When I pass drawId to the geometry shader, everything is still OK. But when I try to access perObject in the fragment shader, my application crashes. On NVIDIA everything works normally.
That looks like another AMD driver bug. Is there a driver developer who could investigate and fix it?
I really like AMD products, but these bugs are annoying to me.
Re: Re: GLSL compute shader incompatibilities
I tried once again and found that it is more an issue of the size specified when clearing the buffer than of the size used when creating it. Here is your modified code, which doesn't work for me:
unsigned int bo[8];
unsigned char data[32 * 16];
glGenBuffers(8, bo);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, bo[0]);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, bo[0]);
glBufferData(GL_SHADER_STORAGE_BUFFER, 32 * 16, &data[0], GL_DYNAMIC_COPY);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 12, &data[0]);
checkGLError("GetBufferSubData");
glClearBufferSubData(GL_SHADER_STORAGE_BUFFER, GL_RGB32UI, 0, 12, GL_RGB_INTEGER, GL_UNSIGNED_INT, 0); // Works when using size 16 instead of 12, but 3 * 4 = 12
checkGLError("ClearBufferSubData");
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 12, &data[0]);
checkGLError("GetBufferSubData");
The code reports INVALID_VALUE error after the ClearBufferSubData call.
Please check my code again. If needed, I can send you a whole project for MSVC 2012 (using Freeglut and GLEW); it is only a simple test suite for this case.
Thank you in advance!
Re: glGetProgramResourceiv() not working in OpenGL 4.3 / 4.4
Although the question is almost a year old, I discovered it in a Google search for glGetProgramResourceiv and thought I'd post a response in case others search and find this page. I had run into the same problem on my AMD 7970 with an earlier driver. With a call to glGetError, I got GL_INVALID_ENUM. The offending item was GL_LOCATION_COMPONENT. The same code worked on an NVIDIA card. I repeated the test now with
vendor = ATI Technologies Inc.
renderer = AMD Radeon HD 7900 Series
version = 4.4.13283 Compatibility Profile Context 14.501.1003.0
and the call succeeds with GL_LOCATION_COMPONENT. However, I get a similar error on AMD with my current driver (but not on NVIDIA) with glGetProgramInterfaceiv(handle, GL_TRANSFORM_FEEDBACK_BUFFER, GL_ACTIVE_RESOURCES, &numResource). The error is GL_INVALID_ENUM.
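In case it helps anyone comparing notes, the two queries in question look roughly like this. This is only a minimal sketch: handle is a linked program object, the variable names are mine, and I'm using GL_PROGRAM_INPUT as the interface for the GL_LOCATION_COMPONENT query as an example (GL_PROGRAM_OUTPUT behaves the same way in my tests).

// Minimal sketch of the two queries discussed above; 'handle' is a linked
// program object and the variable names are only illustrative.
GLint numResources = 0;
glGetProgramInterfaceiv(handle, GL_TRANSFORM_FEEDBACK_BUFFER,
                        GL_ACTIVE_RESOURCES, &numResources); // GL_INVALID_ENUM on my current AMD driver

const GLenum props[] = { GL_LOCATION, GL_LOCATION_COMPONENT };
GLint values[2] = { 0, 0 };
glGetProgramResourceiv(handle, GL_PROGRAM_INPUT, 0 /* resource index */,
                       2, props, 2, NULL, values); // GL_INVALID_ENUM on the older driver, succeeds now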