I’ve actually done more work on this and I now have a relatively simple example that I hope will help clarify this discussion. My sample code is as follows:
#include <GLFW/glfw3.h>
#include <stdbool.h> /* needed for 'bool' when building as C */
#include <stdlib.h>
#include <stdio.h>

#define INCLUDE_SWAPINTERVAL_LINE 0

bool lock_to_vsync = false;
static void error_callback(int error, const char* description)
{
    fputs(description, stderr);
}

static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
    if (key == GLFW_KEY_V && action == GLFW_PRESS)
        lock_to_vsync = !lock_to_vsync;
}
int main(void)
{
    GLFWwindow* window;

    /* install the error callback before glfwInit so init/creation errors are reported */
    glfwSetErrorCallback(error_callback);

    if (!glfwInit())
        exit(EXIT_FAILURE);

    window = glfwCreateWindow(640, 480, "Simple example", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }

    glfwSetWindowPos(window, -1000, 1000); /* position the window on my secondary monitor */
    glfwSetKeyCallback(window, key_callback);

    while (!glfwWindowShouldClose(window))
    {
        glfwMakeContextCurrent(window);
#if INCLUDE_SWAPINTERVAL_LINE
        glfwSwapInterval(lock_to_vsync);
#endif
        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / (float) height;

        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef((float) glfwGetTime() * 50.f, 0.f, 0.f, 1.f);

        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f);
        glVertex3f(-0.6f, -0.4f, 0.f);
        glColor3f(0.f, 1.f, 0.f);
        glVertex3f(0.6f, -0.4f, 0.f);
        glColor3f(0.f, 0.f, 1.f);
        glVertex3f(0.f, 0.6f, 0.f);
        glEnd();

        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
I’m running on Windows 10 with an Nvidia GTX 1080 GPU. I have two monitors: a main monitor with a refresh rate of 50 Hz and a secondary monitor with a refresh rate of 60 Hz. I’m using Fraps to diagnose the frame rate of my rendered window. My goal is to create a ‘primary window’ on the faster, secondary monitor and render to it at that monitor’s refresh rate. Here’s what I’m running into:
1. I would think that, to achieve my objective, I would include the glfwSwapInterval(lock_to_vsync) line in the above code (by setting the #define to 1) while also setting lock_to_vsync = true;
Result: If I do this, my primary window (even though it sits squarely on my secondary monitor) updates at 50 Hz (i.e. the refresh rate of my main monitor).
1a. And, by the way, if I then toggle the lock_to_vsync boolean by hitting the ‘v’ key, the frame rate jumps to some ridiculously high value (like 5900 frames per second).
2. However, if I eliminate the glfwSwapInterval line from the compilation entirely (by setting the INCLUDE_SWAPINTERVAL_LINE #define to 0), the primary window does what I expect/want: if I place it on the secondary monitor, it updates at 60 Hz, and if I move it back to the main monitor, it updates at 50 Hz.
So the success of case #2 argues for not using the glfwSwapInterval() command at all while the primary window’s context is current. This gives me the desired behavior, as long as I am only rendering to one window (see my post-script for what to do with two windows). I think this is the ‘solution’, but it is more than a little disconcerting that glfwSwapInterval(1) locks onto the vsync of the main monitor rather than the vsync of the monitor the window is actually sitting on. Is there any command that lets you control which monitor’s vsync a particular rendered window locks to?