I've been working on a little hobby engine that uses GLFW for the past year or so, and I have a question about using multiple threads for loading. My engine has Scene classes that hold all of the per-object data, plus a GPU class that is responsible for making all the gl* calls and tracking what's currently loaded on the GPU.
I recently set out to implement a feature that lets one Scene keep playing while another Scene loads on a separate thread. After implementing this I quickly hit an exception because one of the gl calls didn't return a valid integer id. I was able to reason out that this was most likely because the OpenGL calls were being made from a thread other than the main thread, where the context was initialized.
So I suppose my question is: how can I keep my current scene responsive while creating a new one? (Think of a loading screen that shows a progress bar, connection status, etc. while the actual level loads.)
I suppose I could refactor things so that the separate thread loads everything into memory, and once that's complete, the main thread makes the OpenGL calls to fill the buffers from the preloaded data. My only concern with that is the time it takes to fill the OpenGL buffers, since the user's screen would be unresponsive for that duration.
Just so we're on the same page: this has nothing to do with GLFW, it's simply how OpenGL works. GL requires a context to be current on the thread you're calling GL functions from.
One way to fix this is to create multiple GL contexts and make one current in each thread. When you create a GL context you can make it share its resources (textures, buffers, …) with another context, so you make the loader thread's GL context share with your main GL context. Then everything will "work".
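With GLFW specifically, a shared context can be made through a second, hidden window. This is only a sketch (it can't run without a display, error handling is trimmed, and `main_window` / `create_loader_context` are placeholder names):

```c
#include <GLFW/glfw3.h>

GLFWwindow *main_window;  /* created elsewhere with your real settings */

GLFWwindow *create_loader_context(void) {
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);  /* no on-screen window */
    /* The last argument makes the new context share textures, buffers,
       and other objects with main_window's context. */
    return glfwCreateWindow(1, 1, "", NULL, main_window);
}

/* In the loader thread:
     glfwMakeContextCurrent(loader_window);
     ... create textures / buffers here ...
     glFlush();   // ensure the commands are visible to the main context
     glfwMakeContextCurrent(NULL);
*/
```

Note the `glFlush()` before handing objects over; without it, the main context may not yet see the loader's work.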
But I would strongly recommend against this. Multiple GL contexts are a well-known source of bugs that vary with GPU driver version and OS.
Another way to minimize the time spent in GL calls is to map buffers on the main GL thread and pass the pointer to the loader thread. Basically:
1. On the main thread, allocate the buffer and map it to get a pointer: glGenBuffers, glBindBuffer, glBufferData (with NULL data), glMapBuffer
2. Pass the pointer to the loader thread
3. Wait until the loader thread finishes writing its data through this pointer
4. Back on the main thread, call glUnmapBuffer and the buffer is ready to use
For textures this would require an extra copy from the PBO to the texture, but that is an asynchronous call and should be quick.
If you use a slightly more modern GL version that has persistent buffers (the GL_ARB_buffer_storage extension, or GL 4.4 and up), then you can map buffers persistently and skip step 4. All you need to do is map the buffer at startup and leave it mapped; just finish writing to it before using it. You can even have just one huge buffer for all the data you want to send to the GPU: vertex data, index data, and so on. Your code decides which parts of that buffer hold which data; all you need is to calculate the correct byte offsets to pass in the pointer arguments of functions like glVertexPointer or glDrawElements.
Can confirm, this works wonders! I was able to create a separate GLFWusercontext to use in the thread that's loading a new scene; once that thread has finished, I set the main thread's context to the new user context. Thank you again for mentioning this!
I'm considering renaming UserContext to GLAuxContext before submitting the PR, so be aware you might need to change the function names when this arrives. I plan to try to submit the PR sometime this month.