Questions tagged [gpu]
A GPU (graphics processing unit) is a specialized processor designed to accelerate the rendering of images.
183 questions
0
votes
2
answers
116
views
Do GPUs re-draw all of a character's vertices/triangles/fragments every frame?
I'm a beginner in game programming and I want someone to confirm my understanding.
Let's say there's a 3D model for a character in the game. This character is made up of triangles. Each vertex in ...
0
votes
2
answers
311
views
How can I efficiently render lots of moving objects in a game?
I'm using OpenGL but this question should apply generally to rendering.
I understand that for efficient rendering in games, you want to minimize communication between the CPU and GPU. This means pre-...
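The excerpt above reasons that efficient rendering means minimizing CPU-to-GPU communication. A minimal sketch of that idea, using a toy stand-in "API" (the `upload`/`draw_instanced` functions below are hypothetical counters, not OpenGL calls), contrasts re-uploading geometry per object per frame with uploading a shared mesh once and issuing a single instanced draw:

```python
# Sketch: minimising CPU->GPU traffic by uploading static data once and
# issuing one instanced draw call, instead of re-uploading every object's
# vertices every frame. The "API" here is a toy stand-in, not OpenGL.

uploads = 0
draw_calls = 0

def upload(data):
    """Pretend buffer upload; just counts CPU->GPU transfers."""
    global uploads
    uploads += 1

def draw_instanced(mesh, count):
    """Pretend instanced draw; just counts draw calls."""
    global draw_calls
    draw_calls += 1

N_OBJECTS, N_FRAMES = 100, 60

# Naive approach: one upload per object, per frame.
naive_uploads = N_OBJECTS * N_FRAMES  # 6000 transfers

# Batched approach: upload the shared mesh once, then per frame upload
# a single buffer of per-instance transforms and draw everything at once.
upload("shared mesh")                    # once, at load time
for frame in range(N_FRAMES):
    upload("instance transforms")        # one buffer per frame
    draw_instanced("shared mesh", N_OBJECTS)
```

With the batched scheme the transfer count scales with frames, not objects × frames, which is the usual motivation for instancing and persistent vertex buffers.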
1
vote
0
answers
867
views
Completely independent dual GPU setup for VR with 100% "SLI efficiency"?
I have a simple (and maybe quite naive) question regarding dual GPU use for Virtual Reality (VR); it has nagged me for years and I couldn't figure out why this can't work, at least in principle:
I ...
0
votes
1
answer
2k
views
Int vs Float, which one is faster on the GPU?
My game needs to loop through a massive amount of data, and the amount of data can increase by a lot depending on the world settings set by the player. The data is too big for the CPU so I need to use the GPU for it ...
1
vote
1
answer
408
views
Computations in GPU Unity
I've made a fluid simulation using particles in Unity, but now it is painfully slow because all computations are done using the CPU. In order to make it faster, I need to do computations on the GPU, ...
0
votes
0
answers
1k
views
AsyncGPUReadback.RequestIntoNativeArray - owner has been invalidated
I have the following C# code in Unity version 2022.2.0a12:
...
0
votes
0
answers
235
views
Is it possible to use hardware acceleration in OpenCL?
I built a small game engine using OpenCL and SDL2 but it's not running very fast compared to Vulkan and OpenGL. I wrote rasterization code, but when I did some research Vulkan and OpenGL use hardware ...
1
vote
1
answer
754
views
GPU Instanced Transparent Mesh Not Rendering
I'm trying to render a bunch of clouds via gpu instancing. This works perfectly fine with the default HDRP Lit Shader, and I get the following result:
However, as soon as I change the surface type ...
0
votes
1
answer
324
views
How to temporarily set an additional system environment variable only in 'play' mode inside the Godot editor?
I'm learning Godot on a laptop that has an AMD discrete GPU. My OS is Arch Linux, so if I want to use the discrete GPU I have to set a system environment variable ...
0
votes
0
answers
39
views
Not clearing FBO's Texture error in battery economy mode
When rendering into an FBO's texture, I'm not using glClear() but overwriting each fragment; GL_BLEND is enabled.
This works just fine, but I just realised that when my laptop switches to economy mode, ...
0
votes
0
answers
2k
views
SDL2 for hardware accelerated graphics?
I am attempting to make a 3D game using SDL2 just to learn and have a bit of fun. I was wondering if there is any way to get SDL2 to do calculations on the GPU. I have read that SDL2 Textures use the GPU for ...
0
votes
1
answer
282
views
Why was 24-bit color support introduced twice in GPUs?
I was doing research, trying to answer the question "Which was the first GPU to support 24-bit color". I know all color since 1992 is 24-bit, even in games like Doom. I mean 16 million ...
0
votes
0
answers
68
views
How is data written from two different gpu cores to the same memory?
Does each core’s data get written to the shared memory one at a time or both at the same time? For instance, when two cores that are next to each other need to write to the same memory spot does the ...
0
votes
1
answer
240
views
What is the difference between these two shaders in terms of performance?
I have implemented a two pass Gaussian blur shader in GLSL like this:
...
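The question above compares a two-pass Gaussian blur against alternatives. The performance argument rests on separability: a 2D Gaussian kernel is the outer product of a 1D kernel, so a horizontal pass followed by a vertical pass produces the same result with 2k taps per pixel instead of k². A small pure-Python sketch (kernel weights and image values are illustrative, not from the question's GLSL) demonstrates the equivalence:

```python
# Sketch: a separable (two-pass) Gaussian blur matches a single 2D pass.
# Two 1D passes cost 2k texture taps per pixel; the 2D pass costs k*k.

K1D = [0.25, 0.5, 0.25]  # illustrative 1D Gaussian weights (sum to 1)

def blur_h(img, k):
    """Horizontal 1D convolution with edge clamping."""
    h, w, r = len(img), len(img[0]), len(k) // 2
    return [[sum(k[i] * img[y][min(max(x + i - r, 0), w - 1)]
                 for i in range(len(k)))
             for x in range(w)] for y in range(h)]

def blur_v(img, k):
    """Vertical 1D convolution with edge clamping."""
    h, w, r = len(img), len(img[0]), len(k) // 2
    return [[sum(k[i] * img[min(max(y + i - r, 0), h - 1)][x]
                 for i in range(len(k)))
             for x in range(w)] for y in range(h)]

def blur_2d(img, k):
    """Full 2D convolution with the outer-product kernel k (x) k."""
    h, w, r = len(img), len(img[0]), len(k) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(len(k)):
                for i in range(len(k)):
                    yy = min(max(y + j - r, 0), h - 1)
                    xx = min(max(x + i - r, 0), w - 1)
                    acc += k[j] * k[i] * img[yy][xx]
            out[y][x] = acc
    return out

image = [[float((x * 7 + y * 13) % 10) for x in range(6)] for y in range(5)]
two_pass = blur_v(blur_h(image, K1D), K1D)
one_pass = blur_2d(image, K1D)
```

Because clamping in x is independent of y, the equality holds exactly even at the image borders here; on a GPU the two-pass version additionally benefits from coherent texture cache access along each axis.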
3
votes
1
answer
1k
views
How to render a grid of dots being exactly 1x1 pixel wide using a shader?
I would like to render a grid with many dots, all equally spaced and being exactly 1 pixel wide.
What I was able to achieve so far is this:
What I would like is (simulated using image editing ...
1
vote
1
answer
346
views
Understanding buffer swapping in more detail
This is more a theoretical question. This is what I understand regarding buffer swapping and vsync:
I - When vsync is off, whenever the developer swaps the front/back buffers, the buffer that the GPU ...
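The mechanism the question describes can be reduced to a tiny sketch: the renderer always draws into the back buffer while the display scans out the front buffer, and a swap just exchanges the two roles. This is a toy model (no vsync timing, buffers are one-element lists standing in for framebuffers):

```python
# Sketch: double buffering. The display reads `front` while the renderer
# writes `back`; swapping exchanges the two pointers. Illustrative only.

front = ["frame 0"]   # what the display currently scans out
back = ["(blank)"]    # what the renderer is drawing into

def render(buf, n):
    """Pretend to draw frame n into the given buffer."""
    buf[0] = f"frame {n}"

def swap():
    """Exchange front and back; what was just drawn becomes visible."""
    global front, back
    front, back = back, front

for n in (1, 2, 3):
    render(back, n)
    swap()
```

After the loop the freshly rendered frame is the one on display, and the previous frame's buffer is available to overwrite; vsync only adds a constraint on *when* the swap is allowed to take effect.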
3
votes
2
answers
2k
views
Why isn't more culling being done on the GPU?
I'm interested in game development and 3D graphics; however, I'm not very experienced, so I apologize in advance if this comes across as ignorant or overly general.
The impression I get is that quite ...
4
votes
3
answers
4k
views
Should calculations be done on the CPU or GPU?
I'm currently learning OpenGL and it's become obvious that I can do most calculations on the CPU or the GPU. For example, I can do something like lightColor * objectColor on the CPU, then send it to ...
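The trade-off in the question above has a simple special case: when both operands are constant for an entire draw call (uniforms, in OpenGL terms), computing the product per fragment on the GPU and computing it once on the CPU give identical results, so the CPU version saves one multiply per fragment. A hedged Python sketch (the colors, fragment list, and `mul3` helper are illustrative, not any engine's API):

```python
# Sketch: a value constant across a draw call can be computed once on the
# CPU and sent as a uniform, instead of recomputed for every fragment.

def mul3(a, b):
    """Component-wise RGB multiply, like lightColor * objectColor in GLSL."""
    return tuple(x * y for x, y in zip(a, b))

light_color = (1.0, 0.9, 0.8)    # per-frame uniform
object_color = (0.5, 0.5, 1.0)   # per-draw uniform

fragments = [(0.2, 0.4), (0.6, 0.1), (0.8, 0.9)]  # dummy fragment coords

# Option A: "shader-style" - redo the multiply for every fragment.
per_fragment = [mul3(light_color, object_color) for _ in fragments]

# Option B: "CPU-style" - compute the product once, upload it as a uniform.
combined = mul3(light_color, object_color)
per_fragment_uniform = [combined for _ in fragments]
```

The general rule the answers tend to give: hoist anything that is uniform across the draw to the CPU, and leave anything that varies per vertex or per fragment on the GPU, where the massive parallelism pays for itself.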
1
vote
1
answer
597
views
Reading from a Texture2D resource in DirectX 11
Hi^^ I am trying to read data from a resource, which I used to do without any problems, but suddenly it is not working.
First I made an immutable resource that has data in it, which is
...
1
vote
0
answers
337
views
How to render decoded video that is on GPU memory without copying to CPU
I'm reading this example of ffmpeg hardware decoding: https://github.com/FFmpeg/FFmpeg/blob/release/4.2/doc/examples/hw_decode.c
At line 109 it does this:
...
0
votes
1
answer
284
views
OpenGL Texture Zig-Zag Artifacts Over Time
I'm working on a deferred shading renderer in OpenGL, where I write all geometry output to Colour, Normal and Depth textures and then apply lighting effects later.
Everything seems fine except once ...
1
vote
1
answer
181
views
GPU render time increases if screen size increases
I created a simple 2D scene in Unity 2017.3.1f1.
I changed the size (height and width) in the Game View and profiled to see how it affects the rendering (photo below).
I saw that the rendering time ...
6
votes
0
answers
92
views
How can I make sure my OpenCL code works correctly on different graphic cards?
I'm experimenting with adding some OpenCL code to my game, but I only have a single Nvidia card and I'm not sure the code will run normally on other platforms.
Is there any way to make sure my code runs ...
1
vote
1
answer
663
views
Why do GPUs have limited amount of allocations?
I've been learning Vulkan lately and I read that you can allocate VRAM only a set number of times, and it doesn't matter if each allocation is 2 GB or 2 KB. Why is that?
I'm specifically referring to ...
1
vote
1
answer
692
views
Does texture splatting always sample 4 x N textures per fragment (regardless of the weights)?
Texture splatting is usually done by vertex painting, where each channel (R, G, B, A) is assigned as a different texture weight.
Due to the way shaders are executed, doesn't it mean that the fragment ...
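The question above hinges on how a straight-line splatting shader behaves: with no branching, all four textures are sampled for every fragment, even where a weight is zero, because GPU wavefronts execute both sides of a divergent branch. A toy Python sketch makes the fetch count explicit (the `sample` counter and flat-color "textures" are illustrative stand-ins, not shader code):

```python
# Sketch: branchless texture splatting always performs 4 fetches per
# fragment, regardless of the RGBA weights. Values are illustrative.

samples_fetched = 0

def sample(texture, uv):
    """Stand-in for a texture fetch; counts how often it is called."""
    global samples_fetched
    samples_fetched += 1
    return texture  # pretend each 'texture' is a single flat colour

def splat(textures, weights, uv):
    """Unconditional weighted blend, like a straight-line fragment shader."""
    r = g = b = 0.0
    for tex, w in zip(textures, weights):
        tr, tg, tb = sample(tex, uv)  # fetched even when w == 0
        r, g, b = r + w * tr, g + w * tg, b + w * tb
    return (r, g, b)

grass = (0, 1, 0)
rock = (0.5, 0.5, 0.5)
sand = (1, 1, 0)
snow = (1, 1, 1)

# This fragment is pure grass, yet all four textures are still sampled.
color = splat([grass, rock, sand, snow], (1.0, 0.0, 0.0, 0.0), (0.5, 0.5))
```

Skipping fetches for zero weights only helps on real hardware when every thread in a warp takes the same branch (e.g. inside a uniform terrain patch); otherwise the warp pays for both paths anyway.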