I'm linking binary data into my C program for ARM Cortex-M with GCC, like this:
arm-none-eabi-ld.exe -r -b binary -o html.o index.html
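The symbols the linker generates for the blob can be checked with nm (assuming html.o is in the current directory):
arm-none-eabi-nm html.o
This should list _binary_index_html_start, _binary_index_html_end and _binary_index_html_size; the names are derived from the input file name.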
To work with the data I have these external variables:
extern const unsigned char _binary_index_html_start;
extern const unsigned char _binary_index_html_end;
extern const uint32_t _binary_index_html_size;
static const char* html = &_binary_index_html_start;
static const size_t html_len = &_binary_index_html_size;
What I don't understand is why I need to take the address of the _binary_index_html_size variable to get the size value.
That would mean that the memory address (pointer) of the _binary_index_html_size variable is itself the size of the blob in bytes. When I debug this it seems to be correct, but it looks like a very strange way to solve this.
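A minimal sketch of the kind of check I mean (the two helper functions are made up just for illustration):

#include <stddef.h>
#include <stdint.h>

extern const unsigned char _binary_index_html_start;
extern const unsigned char _binary_index_html_end;
extern const uint32_t _binary_index_html_size;

/* The value stored at &_binary_index_html_size is meaningless;
   the address of the symbol itself is the size of the blob. */
static size_t size_from_symbol(void)
{
    return (size_t)(uintptr_t)&_binary_index_html_size;
}

static size_t size_from_span(void)
{
    return (size_t)(&_binary_index_html_end - &_binary_index_html_start);
}

Both give the same value when I check them in the debugger, which is what I mean above by "it seems to be correct".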
Edit:
I guess the reason for this may be: because the size of the blob can never exceed what a native pointer can represent (2^32 on my 32-bit target), instead of wasting space storing the size, GCC's linker just creates a symbol whose address equals the size of the blob. The value stored at that address is completely arbitrary and depends on whatever other code happens to be placed there (I tested this). This seems like a clever trick because the size does not occupy any space and the symbol is resolved at link time, so if one does not need the size, no space is wasted.
I think I will instead use (&_binary_index_html_end) - (&_binary_index_html_start); this seems cleaner and works with any toolchain that defines the start and end symbols.
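Roughly like this (a minimal sketch; the macro names and serve_index are made up for illustration):

#include <stddef.h>

extern const unsigned char _binary_index_html_start;
extern const unsigned char _binary_index_html_end;

/* Pointer to the blob and its length, derived only from the start/end symbols. */
#define HTML_DATA ((const char *)&_binary_index_html_start)
#define HTML_LEN  ((size_t)(&_binary_index_html_end - &_binary_index_html_start))

/* Example consumer: pass the blob to whatever transmit routine is available. */
void serve_index(void (*write_fn)(const char *data, size_t len))
{
    write_fn(HTML_DATA, HTML_LEN);
}

Using macros also sidesteps the question of whether the pointer difference is a valid static initializer; it is simply computed wherever it is used.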
static const size_t html_len = &_binary_index_html_size; looks like a bug to me. Who says you need it? _binary_index_html_size contains some random data and can change on recompile, but the address of this variable is exactly the size of the blob.