In a graphics API like Vulkan, you have VkFilter, which can be NEAREST or LINEAR. Leaving aside mipmap filtering, which is another thing altogether, I'm trying to understand what the point of the min filter is within a single mip. I know what the mag filter does: when one texel in the image is spread over multiple pixels on the screen, it does bilinear filtering by taking four samples. The min filter, I'm assuming, should be the opposite, i.e., one pixel on the screen covers multiple texels in the image. But I don't get what the min filter does in this case, because filtering for that kind of aliasing, where one screen pixel contains multiple texels, is a job for mipmapping, i.e., selecting the appropriate mip. I don't understand what the min filter does within the same mipmap level (disregarding mipmapping, which is the job of the separate mipmap filter).

Does the min filter also take four samples? Within the same mip? Or does it just do nothing?
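
For context, this is roughly the sampler setup I mean (a minimal sketch; the specific filter choices are arbitrary, the point is just that the min, mag, and mipmap filters are separate fields):

    #include <vulkan/vulkan.h>

    /* Minimal sketch of a sampler with separate min, mag, and mipmap filters.
     * The struct fields and enums are standard Vulkan; the chosen values are
     * arbitrary and only illustrate that the three filters are independent. */
    VkSampler create_sampler(VkDevice device)
    {
        VkSamplerCreateInfo info = {0};
        info.sType        = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
        info.magFilter    = VK_FILTER_LINEAR;   /* one texel spread over many pixels */
        info.minFilter    = VK_FILTER_LINEAR;   /* many texels per pixel, within a mip */
        info.mipmapMode   = VK_SAMPLER_MIPMAP_MODE_NEAREST; /* the separate mip filter */
        info.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT;
        info.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT;
        info.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;
        info.maxLod       = 0.0f;  /* clamping LOD to 0 keeps sampling in the base mip */

        VkSampler sampler = VK_NULL_HANDLE;
        vkCreateSampler(device, &info, NULL, &sampler);
        return sampler;
    }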

Comment: OpenGL Specification, 8.14 Texture Minification; 8.14.2 Coordinate Wrapping and Texel Selection and 8.14.3 Mipmapping would be the most important parts.

2 Answers

You're thinking of filtering in terms of taking a whole texture and compacting it as a unit. But that's not what texture fetching is at the lowest level.

A texture fetch means that a fragment shader asks for the color at a particular location in texel space. When this happens, the system does some math with the neighboring fragments to get an idea of whether the shader is trying to scale the image up or down. But ultimately, the question being answered is how to convert a location in the texture into a color value.
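
To make that concrete, the "math with neighboring fragments" boils down to a screen-space derivative calculation. Here is a simplified sketch of the scale factor, loosely following the OpenGL spec's minification section; real hardware works on 2x2 fragment quads and applies biases and clamps on top of this:

    #include <math.h>

    /* dudx, dvdx, dudy, dvdy are the derivatives of the texel-space
     * coordinates with respect to window x and y. rho is how many texels
     * one pixel's footprint spans, and lambda = log2(rho). lambda > 0
     * means the texture is being minified (min filter applies);
     * lambda <= 0 means it is being magnified (mag filter applies). */
    float compute_lambda(float dudx, float dvdx, float dudy, float dvdy)
    {
        float rho_x = sqrtf(dudx * dudx + dvdx * dvdx);
        float rho_y = sqrtf(dudy * dudy + dvdy * dvdy);
        float rho   = fmaxf(rho_x, rho_y);
        return log2f(rho);
    }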

If the shader asks for the texture at position (0.1, 0.1), and the texture is 256x256, then that means asking for the texel at the texel-space location (25.6, 25.6). But there is no texel there, because in texel space, texels are at integers. So the system needs to compute an appropriate texel value.

And that's where filtering comes in. Minification and magnification determine what kind of filtering happens (in accord with the sampler parameters), but the system is still going to have to use that particular kind of filtering to compute the texel value at (25.6, 25.6).

If nearest filtering is used, the coordinate is rounded to the nearest integer (in this case, (26, 26)) and that's the color value that gets fetched. If linear filtering is used, the four texels in the range [(25, 25), (26, 26)] are fetched and a weighted blend of them is computed based on the fractional part of the texture coordinate.
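
As a concrete sketch of both cases, using this answer's simplified convention that texel i sits at integer position i (real APIs place texel centers at i + 0.5 and apply the sampler's wrap mode instead of the clamp used here):

    #include <math.h>

    /* Clamp-to-edge fetch of a single-channel texel; a stand-in for the
     * real wrap-mode handling. */
    static float get_texel(const float *image, int w, int h, int x, int y)
    {
        if (x < 0) x = 0;
        if (y < 0) y = 0;
        if (x > w - 1) x = w - 1;
        if (y > h - 1) y = h - 1;
        return image[y * w + x];
    }

    /* NEAREST: round to the closest texel. (25.6, 25.6) -> texel (26, 26). */
    float sample_nearest(const float *image, int w, int h, float u, float v)
    {
        return get_texel(image, w, h, (int)roundf(u), (int)roundf(v));
    }

    /* LINEAR: fetch the 2x2 neighborhood and blend by the fractional parts.
     * For (25.6, 25.6) this mixes texels 25 and 26 on each axis with
     * weights 0.4 and 0.6. */
    float sample_linear(const float *image, int w, int h, float u, float v)
    {
        int   x0 = (int)floorf(u), y0 = (int)floorf(v);
        float fx = u - (float)x0,  fy = v - (float)y0;

        float t00 = get_texel(image, w, h, x0,     y0);
        float t10 = get_texel(image, w, h, x0 + 1, y0);
        float t01 = get_texel(image, w, h, x0,     y0 + 1);
        float t11 = get_texel(image, w, h, x0 + 1, y0 + 1);

        float top    = t00 + (t10 - t00) * fx;
        float bottom = t01 + (t11 - t01) * fx;
        return top + (bottom - top) * fy;
    }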

3 Comments

So it just does the exact same thing as it would for the mag filter? OK. That seems a bit weird to me, because... I don't think there's much use in doing that for the min filter. Am I wrong about that?
Yes, you're wrong about that =P Pixel sample points rarely align perfectly with texel centers, so you get aliasing problems without linear filtering on minification. It looks terrible under motion because it shimmers as the sample location hops from texel to texel from frame to frame. It's worst if you have no mips at all, but it's still an issue even with mipmaps.
Hmm, I guess it can help with aliasing right at the point where there are two texels per screen-space pixel? But no further than that?
You don't need to have mipmaps. They're generally recommended, but they're not required. As such, the min filter can use mipmaps, but it doesn't have to.

In OpenGL there are 6 minifying functions to choose from[1]: Nearest, Linear, Nearest Mipmap Nearest, Linear Mipmap Nearest, Nearest Mipmap Linear, and Linear Mipmap Linear. If you don't have mipmaps you have to set Nearest or Linear (we've probably all had the fun of trying to figure out why the program is crashing at a random GPU memory address only to realize that we either need to generate mipmaps or change the filter settings). Nearest and Linear do exactly what you'd expect: they sample the nearest texel, or take a weighted average of the nearest 4 texels, respectively. X Mipmap Nearest chooses the best mipmap level and applies function X. X Mipmap Linear chooses the two nearest mipmap levels, samples both by applying function X, and produces a linearly weighted average of the two.
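
For example, with the classic OpenGL C API the choice looks roughly like this (a sketch for a 2D texture whose base level is already uploaded; error handling is omitted, and glGenerateMipmap assumes GL 3.0+ or an equivalent extension):

    #include <glad/glad.h>  /* or any other GL 3.0+ loader header */

    /* Sketch: picking minification filtering for a 2D texture that already
     * has its base level uploaded. */
    void configure_filters(GLuint texture, int use_mipmaps)
    {
        glBindTexture(GL_TEXTURE_2D, texture);

        /* Magnification only ever offers NEAREST or LINEAR. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        if (use_mipmaps) {
            /* Linear Mipmap Linear: bilinear within each of the two nearest
             * mip levels, then a linear blend between them ("trilinear"). */
            glGenerateMipmap(GL_TEXTURE_2D);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                            GL_LINEAR_MIPMAP_LINEAR);
        } else {
            /* Without a complete mip chain, stick to NEAREST or LINEAR. */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        }
    }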

There is also anisotropic filtering, which uses some convoluted math to take multiple samples, potentially across multiple mipmap levels.

I haven't reviewed the other APIs' documentation in this regard, but they almost certainly all offer similar or outright identical functionality.

