I am working on a project using OpenGL, where the server needs to render two images: a high-quality image and a low-quality image. The goal is to compute the difference image between the two and send this difference to the client. The client then renders the low-quality image and combines it with the received difference image to reconstruct the high-quality image.
On the server side, I calculate the difference between the two images in a fragment shader by subtracting them directly and rendering the result to a texture:
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2D HighTexture;
uniform sampler2D LowTexture;
void main()
{
    // per-channel difference; components go negative wherever the low-quality image is brighter
    FragColor = texture(HighTexture, TexCoord) - texture(LowTexture, TexCoord);
}
Then, on the client side:
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2D LowTexture;
uniform sampler2D DiffTexture;
void main()
{
    // reconstruct the high-quality image: low-quality image + received difference
    FragColor = texture(LowTexture, TexCoord) + texture(DiffTexture, TexCoord);
}
However, I am running into a problem when the result of the subtraction is negative (for example, where the high-quality image has a shadow): OpenGL clamps the negative values to zero when the result is written to the texture, which leads to incorrect reconstruction on the client side. For instance, if a channel is 50 in the high-quality image and 120 in the low-quality image, the difference of -70 is stored as 0, so the client reconstructs 120 instead of 50.
Both images are of type RGB, and the texture format is GL_UNSIGNED_BYTE.
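Concretely, the textures are allocated with calls along these lines (width and height stand in for the actual image size, and the data pointer is omitted here):

// 8-bit unsigned RGB storage: anything negative written into such a texture is clamped to 0
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);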
If I switch to a floating-point texture format such as GL_RGBA32F, the difference image becomes much larger (16 bytes per pixel instead of 3 bytes for 8-bit RGB), which goes against the goal of reducing the bit rate during transmission. Since the whole point of sending a difference image is to reduce the amount of data transferred, a floating-point format would nullify that advantage.
What I need help with:
How can I handle negative values while still keeping the texture size small enough for efficient transmission?
How can I prevent OpenGL from automatically truncating negative values when calculating the difference between the two images?
Is there a way to preserve negative values in the subtraction result, or should I change the texture format to something that supports floating-point values?
GL_UNSIGNED_BYTE is, at best, the format of the data you are uploading to the texture. What is the internal format of the texture you create? GL_RGB8? Since you only deal with integer values in the first place, why not render to an integer texture? There are 8 and 16 bit integer formats, both of which support negative numbers.
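To illustrate that suggestion, here is a minimal sketch of how the difference render target could be allocated as a signed integer texture on the server. The names diffTex, diffFbo, width and height are placeholders; GL_RGBA8I is chosen only because the spec requires it to be color-renderable (three-channel integer formats are not guaranteed to be), and an integer attachment also means the difference shader has to output ivec4 instead of vec4:

GLuint diffTex, diffFbo;

glGenTextures(1, &diffTex);
glBindTexture(GL_TEXTURE_2D, diffTex);
// GL_RGBA8I: four signed 8-bit channels, each storing values in [-128, 127]
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8I, width, height, 0,
             GL_RGBA_INTEGER, GL_BYTE, NULL);
// integer textures must be sampled with NEAREST filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &diffFbo);
glBindFramebuffer(GL_FRAMEBUFFER, diffFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, diffTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error (the format may not be renderable on this driver)
}

// The difference pass then renders into diffFbo. With an integer attachment,
// the fragment shader declares `out ivec4 FragColor;` and writes integer
// values, for example the per-channel difference scaled to [0, 255]:
//     ivec4 hi = ivec4(texture(HighTexture, TexCoord) * 255.0 + 0.5);
//     ivec4 lo = ivec4(texture(LowTexture,  TexCoord) * 255.0 + 0.5);
//     FragColor = hi - lo;

Note that an 8-bit signed channel cannot hold the full [-255, 255] range a difference of two 8-bit images can take; a 16-bit signed format such as GL_RGBA16I covers it at twice the size. The client would then sample the difference through an isampler2D and convert it back to normalized values before adding it to the low-quality image.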