
I have two different systems, one with OpenGL 1.4 and one with OpenGL 3. My program uses shaders, which are core in OpenGL 3 but only supported as ARB extensions in the 1.4 implementation.

Since I can't use the OpenGL 3 functions with OpenGL 1.4, is there a way to support both OpenGL versions without writing the same OpenGL code twice (once against the ARB/EXT extensions and once against the version 3 API)?

3 Answers


Unless you really have to support ten-year-old graphics cards for some reason, I strongly recommend targeting OpenGL 2.0 instead of 1.4 (in fact, I'd even go as far as targeting version 2.1).

Since using "shaders that are core in 3.0" necessarily means that the graphics card must be capable of at least some version of GLSL, this rules out any hardware that cannot provide at least OpenGL 2.0. Which means that if someone has OpenGL 1.4 and can still run your shaders, they are running drivers that are 8-10 years old. There is little to gain from supporting that (apart from a support nightmare).

Targeting OpenGL 2.1 is reasonable; there are hardly any systems nowadays that don't support it (even assuming a minimum of OpenGL 3.2 may be an entirely reasonable choice).

The market price for an entry-level OpenGL 3.3 compatible card with roughly 1000x the processing power of a high-end OpenGL 1.4 card was around $25 some two years ago. If you ever intend to sell your application, you have to ask yourself whether someone who cannot afford (or does not want to afford) this is someone you'd reasonably expect to pay for your software.

Having said that, supporting OpenGL 2.x and OpenGL >3.1 at the same time is a nightmare, because there are non-trivial changes in the shading language which go far beyond #define in varying and which will bite you regularly.

Therefore, I have personally chosen never again to target anything lower than version 3.2 with instanced arrays and shader objects. This works with all hardware that can reasonably be expected to have the processing power to run a modern application, and it also covers users who were too lazy to upgrade their driver to 3.3, all with the same features in a single code path. OpenGL 4.x features are loadable as extensions if available, which is fine.
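
As a small sketch of that "load 4.x features as an extension" approach (assuming GLFW for context creation and function loading; the extension chosen here, GL_ARB_tessellation_shader, and the helper name are merely examples):

// Sketch only: grab an optional 4.x entry point when the corresponding
// ARB extension is advertised on the 3.2/3.3 context.
#include <GLFW/glfw3.h>

typedef void (*PatchParameteriFn)(unsigned int pname, int value);
static PatchParameteriFn myPatchParameteri = nullptr;

void loadOptionalTessellation()
{
    if (glfwExtensionSupported("GL_ARB_tessellation_shader"))
        myPatchParameteri = reinterpret_cast<PatchParameteriFn>(
            glfwGetProcAddress("glPatchParameteri"));
    // If it stays null, the renderer simply skips the tessellated path.
}
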
But, of course, everybody has to decide for himself/herself which shoe fits best.

Enough of my blah blah, back to the actual question:
About not duplicating code for extensions/core: you can in many cases use the same names, function pointers, and constants. However, be warned: as a blanket statement, this is illegal, undefined, and dangerous.
In practice, most (not all!) extensions are identical to the respective core functionality and work just the same. But how do you know which ones you can use and which ones will eat your cat? Look at gl.spec -- a function which has an alias entry is identical to and indistinguishable from its alias, and you can safely use the two interchangeably.
Extensions which are problematic often have an explanatory comment somewhere as well (such as "This is not an alias of PrimitiveRestartIndexNV, since it sets server instead of client state."), but do not rely on these; rely on the alias field.
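
To make that concrete, here is a minimal sketch (the helper name is made up, and GLFW is assumed to supply the context and glfwGetProcAddress): glGenBuffersARB from ARB_vertex_buffer_object carries an alias entry for the core glGenBuffers in gl.spec, so both names can safely sit behind one function pointer.

// Minimal sketch: load the core name, fall back to its ARB alias.
#include <GLFW/glfw3.h>

typedef void (*GenBuffersFn)(int n, unsigned int *buffers);

GenBuffersFn loadGenBuffers()
{
    // Prefer the core entry point (OpenGL 1.5 and later).
    GenBuffersFn fn = reinterpret_cast<GenBuffersFn>(
        glfwGetProcAddress("glGenBuffers"));

    // Fall back to the ARB alias on drivers that only expose the extension.
    if (!fn)
        fn = reinterpret_cast<GenBuffersFn>(
            glfwGetProcAddress("glGenBuffersARB"));

    return fn; // may still be null if neither is available
}

The enumerants of such aliased extensions are identical as well (GL_ARRAY_BUFFER_ARB has the same value as GL_ARRAY_BUFFER), so the constants need no duplication either.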


5 Comments

Is there a guarantee that the ARB alias is defined if the corresponding core functionality exists?
@josefx: As a blanket statement: no. There is no guarantee that if an implementation supports, e.g., core 3.0, it also supports the EXT_ and ARB_ extensions that have been made core. Though most of the time they're of course supported (it's silly not to, really), I recall it happening once that I wanted to use an extension that suddenly was "gone" on my system because it was core. It "magically reappeared" with the next driver release (presumably because people complained?). You do have at least a formal guarantee (driver bugs exempted) that if it says "Alias" in the spec file...
... you can use one function like the other, though. Of course you are never safe from a buggy implementation, but there's nothing you can do against that anyway. Extensions are generally a complicated matter. For example, all ARB extensions have the ARB_ prefix. But some animals are more equal than others. The "backtensions", as I call them (the ones that implement 3.x functionality on 2.x contexts and 4.x functionality on 3.x contexts), do not follow that naming scheme. Which, frankly, makes the programmer's life needlessly complicated ...
... at the very least, extensions that are not "normal" extensions but "backwards" ones could be called BRA_ instead of ARB_ (or something similar), so the name itself would give you the very valuable hint that things are not quite as you expect them to be. Instead, you must either "know by divination", browse through the appendix of the specification, or parse the spec file, none of which is the best possible way. It's particularly nasty if you use the individual specs (the little text files) with your own function-pointer generator, because it's all but impossible to tell from them in an automated way.
You can quickly look up which graphics cards support which OpenGL/GLSL version at delphigl.de/glcapsviewer/listreports2.php?groupby=version. Most of the cards in use can be found there.

Like @Nicol Bolas already told you, it's inevitable to create two codepaths, one for OpenGL-3 core and one for OpenGL-2, since OpenGL-3 core deliberately breaks compatibility. However, things are not as bad as they might seem, because most of the time the code will differ only in nuances, and both codepaths can be written in a single source file using conditional compilation.

For example

#ifdef OPENGL3_CORE
    // OpenGL 3 core: generic vertex attributes
    glVertexAttribPointer(Attribute::Index[Position], 3, GL_FLOAT, GL_FALSE, attribute.position.stride(), attribute.position.data());
    glVertexAttribPointer(Attribute::Index[Normal], 3, GL_FLOAT, GL_FALSE, attribute.normal.stride(), attribute.normal.data());
#else
    // OpenGL 2 / fixed-function style vertex arrays
    glVertexPointer(3, GL_FLOAT, attribute.position.stride(), attribute.position.data());
    glNormalPointer(GL_FLOAT, attribute.normal.stride(), attribute.normal.data());
#endif

GLSL shaders can be reused similarly, using macros to change occurrences of predefined but deprecated identifiers, or to introduce keywords from later versions, e.g.

#ifdef USE_CORE
#define gl_Position position
#else
#define in varying
#define out varying
#define inout varying

vec4 gl_Position;
#endif

Usually you will have a set of standard headers in your program's shader management code that are used to build the final source passed to OpenGL, again depending on the codepath in use.
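
A small sketch of that idea (the function and the VS_IN/VS_OUT macro names are illustrative, and a loader such as GLEW is assumed to provide the GL 2.0+ entry points). Instead of redefining the keywords themselves as in the snippet above, this variant uses neutral macro names in the shader body, which sidesteps compilers that object to redefining reserved words:

// Illustrative sketch: build the final shader source from a
// version-dependent prelude plus a shared body string.
#include <GL/glew.h>

GLuint compileVertexShader(const char *body, bool useCore)
{
    // The shared body uses VS_IN/VS_OUT rather than in/out or attribute/varying.
    const char *corePrelude =
        "#version 150\n"
        "#define USE_CORE\n"
        "#define VS_IN in\n"
        "#define VS_OUT out\n";

    const char *legacyPrelude =
        "#version 120\n"
        "#define VS_IN attribute\n"
        "#define VS_OUT varying\n";

    const char *sources[2] = { useCore ? corePrelude : legacyPrelude, body };

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 2, sources, NULL); // prelude and body are concatenated
    glCompileShader(shader);
    return shader;
}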

3 Comments

I prefer one binary supporting as many systems as possible, so I'm going to go with Damon's answer for the C++ code and yours for the shaders.
@josefx: Two codepaths don't imply two binaries! You can link both paths into a single binary, in which only the entry point makes the switch.
I did not think of that. Sometimes I hate that Stack Overflow only allows one accepted answer per question :(.

It depends: do you want to use OpenGL 3.x functionality? Not merely use the API, but use the actual hardware features behind that API.

If not, then you can just write against GL 1.4 and rely on the compatibility profile. If you do, then you will need separate codepaths for the different levels of hardware you intend to support. That is standard practice for supporting different levels of hardware functionality.
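
A rough sketch of how that split might be driven at runtime (the function name is made up, the parsing is deliberately naive, and a current context plus a loader such as GLEW is assumed):

// Simplified sketch: choose the codepath from the version string.
#include <GL/glew.h>
#include <cstdio>

bool useGL3Path()
{
    const char *version =
        reinterpret_cast<const char *>(glGetString(GL_VERSION));

    int major = 0, minor = 0;
    if (version && std::sscanf(version, "%d.%d", &major, &minor) == 2)
        return major >= 3;  // 3.x or later: take the GL 3 codepath

    return false;           // otherwise: take the GL 1.4 + extensions codepath
}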
