I'm trying to implement a function that takes a ray, given as its start and end positions, and returns all of its intersections with the map grid. I wrote the code below, but ran into a problem: the function returns correct intersections only when the player's position is an integer. So I need to apply an offset that depends on where the player sits inside its tile. I tried to implement this with my limited linear algebra knowledge, but every attempt was wrong and behaved in ways I couldn't explain, so I'm asking for help here.
Code:
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<vec2> Render::getRayIntersections(float x1, float y1, float x2, float y2, int step)
{
    std::vector<vec2> points;

    float dx = x2 - x1;
    float dy = y2 - y1;

    // Number of 1-pixel steps along the dominant axis.
    int steps = static_cast<int>(std::max(std::abs(dx), std::abs(dy)));
    if (steps == 0)
        return points;

    float xInc = dx / steps;
    float yInc = dy / steps;

    float x = x1;
    float y = y1;

    // Sample one point per tile (every `step` pixels) along the ray,
    // starting at the ray origin, and truncate to integer coordinates.
    for (int i = 0; i <= steps / step; ++i) {
        points.emplace_back(static_cast<int>(x), static_cast<int>(y));
        x += xInc * step;
        y += yInc * step;
    }
    return points;
}
UPD: step is just a multiplier used to apply the same algorithm to the 2D preview, where each tile is `step` pixels wide. In my case a tile is 16 pixels, so step = 16.
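To make the problem concrete, here is a hand-traced call with hypothetical numbers (the `render` instance and the exact values are made up for illustration; I'm assuming coordinates are in pixels):

// Hypothetical call, assuming pixel coordinates, 16-pixel tiles (step = 16) and a
// horizontal ray. The player stands 8 pixels into tile (1, 0), i.e. at x = 24, y = 8.
std::vector<vec2> pts = render.getRayIntersections(24.0f, 8.0f, 104.0f, 8.0f, 16);
// dx = 80, dy = 0  ->  steps = 80, xInc = 1, yInc = 0
// The loop samples every 16 pixels starting at the player itself:
//   (24, 8), (40, 8), (56, 8), (72, 8), (88, 8), (104, 8)
// but the vertical grid lines the ray actually crosses are at x = 32, 48, 64, 80, 96.
// Every sample is shifted by the player's 8-pixel offset inside its tile.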
When the player is centred on a grid cell, i.e. its position is an integer, the algorithm works. But when the player is somewhere inside a tile, the intersection points simply follow the player without any offset and the result is wrong.
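For reference, this is roughly the kind of offset handling I think I need, sketched as a standard DDA-style grid traversal (the Amanatides & Woo approach). This is not my actual code: gridCrossings, tileSize and the standalone vec2 are placeholder names so the sketch compiles on its own, and I haven't verified it inside my renderer.

#include <cmath>
#include <vector>

// Standalone stand-in for the engine's vec2, just so the sketch is self-contained.
struct vec2 { float x, y; };

// Sketch: walk the ray from start to end in tile units and record every
// grid-line crossing, converted back to pixel coordinates.
std::vector<vec2> gridCrossings(float x1, float y1, float x2, float y2, float tileSize)
{
    std::vector<vec2> points;

    // Work in tile units so each grid cell is 1 x 1.
    float sx = x1 / tileSize, sy = y1 / tileSize;
    float ex = x2 / tileSize, ey = y2 / tileSize;

    float dx = ex - sx, dy = ey - sy;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f)
        return points;
    float dirX = dx / len, dirY = dy / len;

    // Tile the ray starts in (these indices are what you would test against the map).
    int cellX = static_cast<int>(std::floor(sx));
    int cellY = static_cast<int>(std::floor(sy));
    int stepX = (dirX > 0) ? 1 : -1;
    int stepY = (dirY > 0) ? 1 : -1;

    // Distance along the ray between two consecutive vertical / horizontal grid lines.
    float tDeltaX = (dirX != 0) ? std::abs(1.0f / dirX) : INFINITY;
    float tDeltaY = (dirY != 0) ? std::abs(1.0f / dirY) : INFINITY;

    // Distance along the ray to the FIRST vertical / horizontal grid line.
    // This is where the fractional position inside the start tile comes in.
    float tMaxX = (dirX > 0) ? (cellX + 1 - sx) * tDeltaX
                : (dirX < 0) ? (sx - cellX) * tDeltaX : INFINITY;
    float tMaxY = (dirY > 0) ? (cellY + 1 - sy) * tDeltaY
                : (dirY < 0) ? (sy - cellY) * tDeltaY : INFINITY;

    float t = 0.0f;
    while (t < len) {
        // Advance to whichever grid line (vertical or horizontal) comes first.
        if (tMaxX < tMaxY) { t = tMaxX; tMaxX += tDeltaX; cellX += stepX; }
        else               { t = tMaxY; tMaxY += tDeltaY; cellY += stepY; }
        if (t > len)
            break;
        // Record the crossing point, converted back to pixel coordinates.
        points.push_back({ (sx + dirX * t) * tileSize, (sy + dirY * t) * tileSize });
    }
    return points;
}

The difference from my loop above is that each point is placed on an actual grid line, at a distance computed from the fractional position inside the start tile, instead of being sampled every 16 pixels starting from the player.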
