Big OpenGL performance drop when porting from fixed-function to shaders

Andrew McDonald

I have some fixed-function OpenGL code that I'm upgrading to use shaders. I've done the code work, but am seeing a huge performance drop. The frame rate in a typical scene has halved. There are still some parts of the scene using the fixed-function pipeline though. Are there any obvious gotchas that could account for this?

Off the top of my head, the code looks comparable. I've replaced the fixed-function code, which used one directional light with ambient and diffuse only, with a simple vertex shader doing per-vertex directional ambient/diffuse lighting, and a trivial pixel shader with one texture lookup modulated with the interpolated colour. Previously the code called glInterleavedArrays; now I have three calls to glVertexAttribPointer instead (not using vertex array objects). The vertex & index data are stored in buffer objects. The rendering looks identical.
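For reference, the new path is roughly equivalent to the GLSL below. This is a from-memory sketch rather than the actual game code, and the attribute/uniform names are illustrative:

```glsl
// Vertex shader: one directional light, ambient + diffuse, computed per vertex.
uniform mat4 u_mvpMatrix;
uniform mat3 u_normalMatrix;
uniform vec3 u_lightDir;   // normalised light direction, in eye space
uniform vec4 u_ambient;
uniform vec4 u_diffuse;

attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texCoord;

varying vec4 v_colour;
varying vec2 v_texCoord;

void main()
{
    vec3 n = normalize(u_normalMatrix * a_normal);
    float nDotL = max(dot(n, u_lightDir), 0.0);
    v_colour = u_ambient + u_diffuse * nDotL;
    v_texCoord = a_texCoord;
    gl_Position = u_mvpMatrix * vec4(a_position, 1.0);
}

// Fragment shader: one texture lookup modulated with the interpolated colour.
uniform sampler2D u_texture;
varying vec4 v_colour;
varying vec2 v_texCoord;

void main()
{
    gl_FragColor = texture2D(u_texture, v_texCoord) * v_colour;
}
```

Nothing exotic, as you can see; this is about the smallest shader pair that reproduces the old fixed-function lighting.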

The test machine is an Intel Clarkdale Core i5-660 running Windows 7 Ultimate x64 SP1, using the Clarkdale GPU. Driver version is 8.15.10.2559. I don't see the same issue on another test machine with a GeForce card, so I think it's something to do with Intel's GPU or drivers.

Deepak Vembar (Intel)

Hi Andrew,
Would you be able to share the test application with us, so that we can replicate the issue and test it out? Have you tried any other applications with shaders to see if you get the same issue?
Thanks
-deepak

Andrew McDonald

Thanks for the reply, Deepak (a bit late though; it would be great if Intel could monitor these forums more closely!)

I can't share code as it's for a full game. But by chance another user had posted about an issue that sounds the same, around the same time I did:

http://software.intel.com/en-us/forums/showthread.php?t=102479&o=a&s=lr

He had made a sample to demonstrate it, so could you try testing that? If not I'll try to make a sample of my own.

Sergey Kostrov
Quoting Andrew McDonald I can't share code as it's for a full game. But by chance another user had posted about an issue that sounds the same, around the same time I did:

http://software.intel.com/en-us/forums/showthread.php?t=102479&o=a&s=lr

[SergeyK] Where did you see a "...sample to demonstrate..."?

He had made a sample to demonstrate it, so could you try testing that? If not I'll try to make a sample of my own.

It would be nice to have a test case.

Andrew McDonald
Quoting Sergey Kostrov Where did you see a "...sample to demonstrate..."?

Most of the post is about that. The test scene with the sphere, in the screenshot...? The code and binary weren't actually posted, but I'm assuming he'd be up for sharing it with Intel if they asked.

Andrew McDonald

Hi, is there any progress on this? Was 'survivorx' able to share his test case?

Deepak Vembar (Intel)

Hi Andrew,
We do not have the test application yet. We will follow up to get it and replicate the issue on our systems.
Thanks
-deepak

adamredwoods

I'd like to add that I see similar results on a Dell Inspiron with the Intel G41 Express Chipset and the latest 8.15.10 drivers. I've tested a basic GLSL shader, and the vertex shader performance is slow; it seems as if the shader is being processed on the CPU. I've compared the exact same shader in Google Chrome's WebGL and saw high frame rates, similar to fixed-function pipeline rates. I'm dealing with about 70,000 vertices, which is no problem in the fixed-function pipeline, but performance degrades badly with GLSL (from 150 fps to 22 fps). I've tried changing the 3D settings in the display control panel, but there is no noticeable effect. The only immediate solutions I can think of are to use the ANGLE library, which translates OpenGL calls to Direct3D, or to use a non-Intel video card.
