It would most likely not be faster and it could even be incorrect.
(1) likely not correct:
If you sum a billion integers as integers, you will very likely run into integer overflow: a signed 32-bit integer tops out at about 2.1 billion, so even modest values blow past that long before the sum is finished. Your result will be nonsense if that happens. It might also be that some part of your stack notices the overflow and throws an error instead; it is a failure either way. Casting the input numbers to Double avoids this problem, because double-precision floating-point numbers have enormous range.
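As a rough sketch of what that looks like (the table and column names here are made up, and the exact cast syntax varies between databases):

```sql
-- Hypothetical table "measurements" with a 32-bit integer column "value".
-- Casting before aggregating makes the accumulator a double, which will not
-- overflow; summing the raw int column can.
SELECT AVG(CAST(value AS DOUBLE PRECISION)) AS avg_value
FROM measurements;
```

If you need an exact result, casting to a wider integer type such as BIGINT also works, as long as the true sum fits in 64 bits.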
(2) likely not faster:
"Latency Numbers Every Programmer Should Know" puts a random read on an SSD at roughly a hundred thousand times the cost of a typical CPU instruction. Your problem is therefore completely dominated by disk; the tiny amount of compute your approach might save is irrelevant.
The question of latency vs. throughput came up in the comments, and throughput is the more relevant number here. The gap is smaller, but the CPU still outpaces the disk by a lot. A really fast SSD may read at around 7 GB/s. Under ideal conditions, if your table contained only the numbers you want to average and no book-keeping information, other columns, ignored rows, or free space, your 2 billion int32 values would be read in a bit more than a second, and a real database will be much, much slower than that. On the CPU side, 2 billion simple instructions run in roughly one second even on an ARM Cortex-A8 from 2005.
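For a back-of-the-envelope check of that claim (the figures are assumptions, not measurements, and the query only runs as-is on databases that allow SELECT without a FROM clause):

```sql
-- 2 billion int32 values * 4 bytes each = 8 GB of raw data.
-- At ~7 GB/s of sequential read, that is already more than a second of pure I/O,
-- before any per-row work the database itself has to do.
SELECT (2000000000.0 * 4) / (7.0 * 1000 * 1000 * 1000) AS seconds_of_raw_io;  -- ~1.14
```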