Swap interval N causes 1/N fps unless glGetError is called


Hi Everybody, :slight_smile:

I’ve noticed an interesting behavior involving the swap interval and glGetError. If the swap interval is set to N, the frame rate drops to around 1/N fps. But if glGetError is invoked in the rendering loop, everything works as it’s supposed to and the frame rate is MonitorRefreshRate/N fps.

Here is a stripped-down example that repeatedly fades the background color to white and back to black over 1-second intervals.

  • If the swap interval is 0 (or not set at all), it runs at a very high fps as expected, regardless of whether errors are checked.
  • If the swap interval is N and errors are checked, it runs at MonitorRefreshRate/N fps.
  • If the swap interval is N and errors are NOT checked, it runs at 1/N fps. Why?


#include <GLFW/glfw3.h>
#include <cmath>
#include <iostream>

int main()
{
    // GLFW: initialize and configure
    if (!glfwInit()) {
        std::cerr << "ERROR: Failed to initialize GLFW" << std::endl;
        return -1;
    }

    // Create OpenGL context and window
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4); // version is not particularly important,
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); // same behavior with e.g. 3.3
#ifdef __APPLE__
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
#endif // __APPLE__

    // GLFW window creation
    GLFWwindow* window = glfwCreateWindow(400, 400, "GLFW window", NULL, NULL);
    if (!window) {
        std::cerr << "ERROR: Failed to create GLFW window" << std::endl;
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    // Enable V-Sync on every Nth refresh (Hz/N fps, 0 = no synchronization)
    glfwSwapInterval(1);

    // render loop
    int frameIdx = 0;
    while (!glfwWindowShouldClose(window))
    {
        std::cout << "Frame " << frameIdx++ << std::endl;
        float BW = std::abs(std::fmod((float)glfwGetTime(), 2.0f) - 1.0f);

        // render
        glClearColor(BW, BW, BW, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
        glfwSwapBuffers(window);
        glfwPollEvents();

        // Why does glGetError eliminate the jittering when the swap interval is non-zero?
        // Fraps (www.fraps.com) also eliminates it without glGetError.
        int err = glGetError();
        if (err)
            std::cerr << "GL error " << err << std::endl;
    }

    // terminate, clearing all previously allocated GLFW resources.
    glfwTerminate();
    return 0;
}

I suspect a driver bug, since this issue is present on an HP EliteBook 850 G5: Intel Core i5-8350U, Intel HD Graphics 620 (driver, a newer one is available but I’m not allowed to update it), up-to-date Windows 10 x64, built-in screen only, with 125% display scaling.
The issue is not present on another machine, an HP ProBook 4330s: Intel Core i5-2450M, AMD Radeon 7470M (driver 15.7.1, the latest supported), up-to-date Windows 7 x64, built-in screen only, no display scaling, Aero enabled.

I use the latest Visual Studio 2017 (15.9.9) and GLFW 3.2.1 on both systems, and have tried x86/x64 and debug/release builds too.

Interestingly, if Fraps (www.fraps.com) is running, the issue doesn’t appear. :open_mouth:

Has anybody seen this before? Is there any explanation or solution for it?



I’ve not seen this before and have no idea what’s going on, but it does sound like a driver issue. I take it you don’t have any errors being output, nor any when you check with something like CodeXL?



Hi dougbinks,

Of course, there is no error from glGetError nor from the GL debug output. I’ve updated the example to print the frame index in the loop. It clearly shows the loop is spinning at the desired frequency of MonitorRefreshRate/N; however, the screen is not updated accordingly.

I don’t see anything suspicious in the Visual Studio performance profiler; the logs are pretty much the same with or without glGetError.

I’ve never used CodeXL or similar tools before, since I’m just learning. CodeXL says it requires an AMD GPU, but the laptop in question has an Intel HD 620. Anyway, I’ve tried it.
I created a new project, started debugging, and got a warning about the non-AMD GPU (of course). The fading is smooth, the frame printout spins at Hz/N, no issue. Switching to profiler mode (GPU performance counters or application timeline), the issue is present: the screen is updated once per second (swap interval 1), while the frame printout still spins at Hz/N. After quitting the profiled application, CodeXL shows an error, “Unable to gather profile data”, probably due to the missing AMD GPU.

I’ve also tried RenderDoc (also for the first time). When the application is launched through it, the fading is smooth, no issue, …

I guess CodeXL, RenderDoc, and Fraps hook something into rendering and/or buffer swapping that makes the driver and/or windowing system happy. Some similarly unintended and fortunate magic could be done by glGetError too.

I don’t consider this issue a GLFW bug, rather an unfortunate coincidence in the driver. I don’t mind calling glGetError in the rendering loop, but it’s odd. :frowning:



CodeXL can be pretty useful for debugging API calls even on non-AMD GPUs, though RenderDoc is much better for overall graphics debugging. For performance work on Intel, try Intel’s GPA, though by the sound of it that won’t show up the issue you have.



I’ve never seen anything like that before… Bizarre! I would also blame a driver bug, since there’s really not a lot that either your code or GLFW can be doing wrong to cause the issue. Maybe try adding glFinish() somewhere in your loop to see if that makes a difference?



Bingo! glFinish cures the issue. (glFlush doesn’t.) Thanks for the help! :slight_smile:
I hadn’t seen glFinish (or glFlush) in any tutorial I’ve checked; usually they just swap the buffers.



Note that glFinish() blocks until all previously issued commands are complete, thus serialising the CPU and GPU, so it’s not something you should always do.

You might want to see if adding a GPU query helps. I added some code to NanoVG for that, though it’s currently commented out; see the glGetQueryObjectui64v stuff.



Adding a GPU query helps. :slight_smile: I added these lines before the loop (using GLAD to load the functions):

GLuint queryId = 0;
glGenQueries(1, &queryId);

then start the loop with:

glBeginQuery(GL_TIME_ELAPSED, queryId);

and before swapping the buffers, end the query and check whether the result is available:

glEndQuery(GL_TIME_ELAPSED);
GLint available = 1;
glGetQueryObjectiv(queryId, GL_QUERY_RESULT_AVAILABLE, &available);
//if (available) {
//    GLuint64 timeElapsed = 0;
//    glGetQueryObjectui64v(queryId, GL_QUERY_RESULT, &timeElapsed);
//    std::cout << "Time elapsed: " << timeElapsed << std::endl;
//}

Just querying whether the result is available is enough in my case; I don’t have to fetch the actual result. There might be reasons for this that I don’t understand yet, since according to the documentation glGetQueryObject implicitly flushes the GL pipeline, yet glFlush() doesn’t have the same healing effect on the issue. (Does glGetError() also flush/wait for the pipeline? :thinking:)

Btw, since it’s suspected to be a driver issue that might be fixed in later versions we don’t have to dig too deep into this problem. You already helped a lot to understand a bit better what OpenGL does behind the scenes. Thanks for all of your help! :slight_smile:



As I understand it now, glGetQueryObject*(…, GL_QUERY_RESULT_AVAILABLE, …) is a non-blocking call that tells the GPU a result is expected soon, then returns a boolean saying whether the result is already available. As a consequence, the GPU may feel a little pushed to do the job, and it does, even if the blocking glGetQueryObject*(…, GL_QUERY_RESULT, …) is never actually called. At least it makes some sort of sense to me that the driver/GPU could behave like this if it internally buffers rendering commands somehow. :open_mouth:

Sometimes I see flashes when the example updates the screen ~once per second. A flash looks like a color being shown for one frame, then switching to another color with no screen update for the next ~second. That is “OK” with the above theory, as sometimes the GPU may burst through the rendering right at a screen refresh. Also, glGetError presumably needs the queued rendering commands to be processed before it can report an error. I’m not entirely satisfied with this theory either, but it weirdly explains something. Sort of. :frowning: