A reliable approach is to use std::memcpy to copy the target bytes into a uint32_t object. std::memcpy is guaranteed to do this safely in every version of C++, and the pattern is common enough that compilers can usually optimize the copy away.
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <stdexcept>
template<class T, std::size_t N>
T read_int_from_bytes(const std::array<std::byte, N>& data, std::size_t index)
{
    // Reject reads that would run past the end of the array
    if(index + sizeof(T) > N) {
        throw std::invalid_argument("read_int index out of bounds");
    }
    // Integer to copy the bytes into
    T result;
    // Copy the bytes; the compiler can usually elide this copy
    std::memcpy(&result, &data[index], sizeof(result));
    return result;
}
Here is an example. This test fills an array with the byte values { 0x00, 0x10, ..., 0xB0 } and reads a uint32_t starting at index 4.
#include <iostream>
#include <iomanip>

int main()
{
    std::array<std::byte, 12> data{};
    // Fill the array with { 0x00, 0x10, ..., 0xB0 }
    for(std::size_t i = 0; i < data.size(); ++i)
    {
        data[i] = static_cast<std::byte>(0x10 * i);
    }
    std::cout << "0x" << std::hex << read_int_from_bytes<std::uint32_t>(data, 4) << '\n';
}
The test prints 0x70605040 when I try it here. You can also see from the generated assembly that the entire function call is optimized out and the result is precomputed, showing that the compiler was able to reason through the memcpy and remove it entirely.
Beware that the result is unspecified: it depends on the endianness of the target platform, that is, on whether the first byte is the most significant or the least significant one. For many applications this doesn't matter, but if it does, C++20 introduced std::endian, which you can use to check the platform's byte order at compile time.
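For example, here is a minimal sketch of such a check, assuming a C++20 compiler (std::endian lives in the <bit> header):

#include <bit>
#include <iostream>

int main()
{
    // std::endian::native (C++20) identifies the platform's byte order at
    // compile time, so the branch is resolved with no run-time cost.
    if constexpr (std::endian::native == std::endian::little) {
        std::cout << "little-endian platform\n";
    } else if constexpr (std::endian::native == std::endian::big) {
        std::cout << "big-endian platform\n";
    } else {
        std::cout << "mixed-endian platform\n";
    }
}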
Alternatively, you can assemble the value byte by byte with shifts, e.g. value = ar[0] + (ar[1] << 8) + (ar[2] << 16) + (ar[3] << 24) in a utility function. That way you decide explicitly whether the data is stored as big or little endian, and the same code works in both cases on every platform.
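A minimal sketch of such a utility, assuming the bytes are stored least significant first (the name read_u32_le is my own):

#include <cstddef>
#include <cstdint>

// Assemble a uint32_t from four bytes stored least significant first.
// The interpretation is fixed by the code, not by the platform's byte order.
std::uint32_t read_u32_le(const std::byte* p)
{
    return  std::to_integer<std::uint32_t>(p[0])
         | (std::to_integer<std::uint32_t>(p[1]) << 8)
         | (std::to_integer<std::uint32_t>(p[2]) << 16)
         | (std::to_integer<std::uint32_t>(p[3]) << 24);
}

With the test data above, read_u32_le(&data[4]) yields 0x70605040 on any platform.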