OpenGL interoperability

Intel's OpenCL drivers advertise the GL interoperability flag, so I assume they are meant to provide easy access to OpenGL objects via the OpenCL implementation. However, this seems to be slightly buggy.

Everything goes smoothly and without complaints until the texture object is used after calling clEnqueueReleaseGLObjects and clFinish(queue): OpenGL then throws GL_INVALID_OPERATION when glUseProgram is called. Does your implementation touch anything other than the texture that was accessed via clCreateFromGLBuffer? The code works just fine with the ATI and NVIDIA OpenCL implementations, and we have checked the OpenCL spec closely enough that we are quite likely not doing anything wrong there.


Thanks for reporting the issue.

In our implementation we assume that GL objects are acquired/released in the same thread in which the GL context was created.

In general, there are some GL compatibility issues between different vendors. To find a solution to your problem, please provide us with: sample code, your graphics card vendor, and your driver version.

Thanks for the swift reply.

I'm working with a quite large existing 3D engine, so I am unable to provide useful OpenGL-related code. Basically, there is a large amount of existing textures, shader programs, and whatnot. The engine is not multithreaded.

The CL code is:

glFinish();
clError = clEnqueueAcquireGLObjects(queue, 1, &image_from_OpenGL, 0, NULL, NULL);
...
clEnqueueNDRangeKernel(...) etc. The image is accessed in the kernels using write_imagef/read_imagef with the sampler:
const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
The texture itself is a simple RGBA-8888 texture.
...
clError = clEnqueueReleaseGLObjects(queue, 1, &image_from_OpenGL, 0, NULL, NULL);
clFinish(queue);

The GPU I'm using is an HD 5700 series card from AMD with the 10.12 drivers (not the latest, but the version recommended for the APP SDK).

The sequence you use is very basic, and I don't expect any issue with it.

I can think of two potential issues:
1. Writing outside the texture boundaries. Writing outside the image can cause unexpected results on the CPU. Please remove the write_imagef() calls and see if the problem still exists.
2. Using a texture format not supported by our SDK. What is the texture format you are using?

Is it possible to share binaries?

The code runs well when the image is instead created with a plain clCreateImage2D (the output was verified by writing the image to a PNG after processing).

The texture format is GL_RGBA8, which corresponds to the CL channel order CL_RGBA and channel data type CL_UNORM_INT8; those were used to create the non-OpenGL version of the image (the buffer used in the calculations is also of the same type). As an additional note, the image size is 512x512, so it's a nice power-of-two texture.

As a quick test I removed the image writes (and reads) from the kernel, and it had no effect. I also removed the kernel calls altogether, and it still crashes. It seems that simply calling clEnqueueAcquireGLObjects/clEnqueueReleaseGLObjects produces this problem, even if nothing is actually done with the buffers.

Unfortunately I am unable to share any binaries at this moment.

I take everything back. Thanks for pointing me at the texture format. Digging inside the engine, I noticed that we use plain GL_RGBA everywhere, so the driver may select the actual storage size itself (e.g. GL_RGBA16). After forcing the internal format to GL_RGBA8, everything works like a charm.

So this was our bad. Sorry for wasting your time, and congratulations on making a surprisingly good OpenCL implementation.

You are welcome; it was my pleasure to help.

Glad to hear it works for you now.
