
Is there a way to detect when an array (or any variable) will be larger than the amount of free memory in the system? Secondly, if such a variable were declared, what would happen?

Some background: I'm on an embedded ARM device with 20KB of RAM; it is not running an RTOS. I'm trying to read sensor data out into arrays, but there's a significant amount of sensor data that can be read. If I were allocating the memory with malloc, I could check whether the return value was NULL to see if enough space was available on the heap (though this question seems to indicate that might be optimistic). However, I'm not sure what happens when an array that is larger than the available memory is declared.
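For reference, the malloc check I have in mind looks something like this (names and sizes are just for illustration):

#include <stdint.h>
#include <stdlib.h>

void read_sensors(size_t sample_count){
    /* malloc returns NULL if the heap cannot satisfy the request */
    uint8_t *samples = malloc(sample_count);
    if (samples == NULL) {
        return;  /* out of heap memory -- handle the error */
    }
    /* ... read sensor data into samples ... */
    free(samples);
}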

The obvious solution is to do what the linked question's accepted answer suggests: allocate all of the memory for the data upfront. But if declaring arrays dynamically, how could one tell if the system was out of memory, and what would happen if one didn't check?

Edit: some examples to illustrate what I'm referring to. What would happen if one defined an array like so:

#include <stdint.h>

void breaking_things(void){
    uint8_t contrived_example[30000] = {0};
}

This isn't possible on my system, where I only have 20KB of free space, but what would happen if it ran?

As another example:

void breaking_things(void){
    uint8_t contrived_example0[7000] = {0};
    uint8_t contrived_example1[7000] = {0};
    uint8_t contrived_example2[7000] = {0};
}
  • Can you clarify how you are allocating the array? It makes a difference whether the array is allocated with malloc (which seems not to be the case), allocated as static memory or allocated as stack memory. A simple code example may help. Commented Nov 20, 2019 at 21:53
  • It's generally bad form to use dynamic allocation in a small embedded system. Figure out how much memory you have in the device, allocate it all to a big static array, and take what you need out of that (perhaps with something like a simple mark-release allocator; see the sketch after these comments). Commented Nov 20, 2019 at 21:57
  • If you define the array as a global one, then the linker will scream at you that your BSS section is overlapping with something else. Assuming a proper linker script, of course... If it is local (on the stack) then... well, it's a bad idea. Commented Nov 20, 2019 at 21:57
  • As I said, the most practical way is to define a static (global) array and let the linker worry about the space. Your examples are overflowing the stack, which is mostly not detectable (until it is too late). Commented Nov 20, 2019 at 22:07
  • This path is fraught with danger in my opinion. There are different ways to check the remaining stack size, but they are either platform-dependent or quite fragile. I personally wouldn't go there. But for example: Is it possible to predict a stack overflow in C on Linux? Commented Nov 20, 2019 at 22:11
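A minimal sketch of the mark-release allocator suggested in the comments, over a single static pool (all names and sizes are illustrative; real code would also round allocations up for alignment):

#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE 16384u
static uint8_t pool[POOL_SIZE];
static size_t pool_top;  /* index of the next free byte */

void *pool_alloc(size_t n)
{
    if (n > POOL_SIZE - pool_top)
        return NULL;  /* out of memory is detectable here */
    void *p = &pool[pool_top];
    pool_top += n;
    return p;
}

size_t pool_mark(void) { return pool_top; }          /* remember current state */
void pool_release(size_t mark) { pool_top = mark; }  /* free everything allocated since mark */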

3 Answers


As far as the C language is concerned, attempting to allocate a local or global variable that doesn't fit in memory has undefined behavior, which means that anything can happen.

In practice, depending on your toolchain, your MMU/MPU setup if any, and the exact memory layout, the consequence is either that writing outside the memory area reserved for the stack overwrites whatever that location in memory contains, leading to “fun” results, or that you get some kind of memory-related fault. You definitely do not want to overwrite other memory, and memory faults are hard to recover from, so you should make sure that this can't happen.

Major embedded toolchains have a way to compute the maximum stack usage of a program. More precisely, they compute a worst-case approximation which tends to be good enough for the kind of programs you'd typically run on a small embedded system. This only works if the program doesn't use dynamic features such as function pointers, recursion, variable length arrays and alloca, so don't do that.

For example, with GCC, the compiler can tell you the stack usage of each function, and combined with the control flow graph you can determine an upper bound for the stack usage of the program. See How to determine maximum stack usage in embedded system with gcc?

See also Stack Size Estimation which mentions some tools that work separately from the compiler.

Remember to take interrupt routines into account in addition to main, if applicable.

If you make the variable permanent (static storage duration), i.e. if it's a global variable or a local variable with the static keyword, your linker should tell you if you run out of BSS space. This is more straightforward, but you lose the ability to use that memory for a different purpose during part of the program (e.g. during boot). You can regain some of this ability by making the global variable a union, but that's dangerous because the compiler won't protect you against using the wrong member of the union at the wrong time.
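A minimal sketch of the union approach, assuming the boot-time buffer and the sensor array are never needed at the same time (names and sizes are illustrative):

#include <stdint.h>

/* Both members share the same storage; only one may be "live" at a
 * time, and nothing stops you from using the wrong one -- that is the
 * danger mentioned above. */
static union {
    uint8_t boot_scratch[8000];  /* used only during boot */
    uint8_t sensor_data[8000];   /* used only after boot */
} shared_buf;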


11 Comments

Function pointers and interrupts are pretty much standard in all embedded systems, so the mentioned toolchain features for calculating stack usage aren't very useful or reliable.
And using union for the sake of saving memory is plain bad advice - that very practice is banned by MISRA-C, for example. There are lots of better ways to optimize for memory consumption, most notably micro-managing the size of integer variables.
@Lundin Function pointers for anything other than interrupt handlers? I've worked on systems that made effective use of build-time stack usage analysis, and forbade the use of recursion and function pointers (except if needed to register an exception handler at boot time) primarily so that this analysis would work. You just need to take all entry points into account.
In addition to interrupts/vector table, they are also used for callbacks, state machines, bootloaders and so on. Yeah they potentially do screw up static analysis of stack use, but so do interrupts.
@M.M Hmmm, I thought that this would hit an implementation limit, but after checking I can't find such a limit. That would be a pretty bad omission in the standard though. Is the program #include <stdint.h> int main(void) { static char a[SIZE_MAX]; char b[SIZE_MAX]; return 0; } really strictly conforming? (P.S. asking for a citation for a reasonable-looking informal statement comes off as rather passive-aggressive. If you've concluded that the behavior is in fact defined, which would be weird since pretty much every implementation rejects it, please share your wisdom instead of hoarding it.)

gcc offers the flags -fstack-usage and -Wstack-usage, which output the stack usage of each function and warn when it is excessive. This can be a starting point for finding functions that risk a stack overflow, but it won't help you with call chains that overflow the stack in many small chunks.
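For example (the flags are real GCC options; the .su output below is illustrative and varies by target and GCC version):

/* demo.c -- compile with:  gcc -c -fstack-usage -Wstack-usage=1024 demo.c */
#include <stdint.h>

void breaking_things(void){
    uint8_t contrived_example[30000] = {0};  /* exceeds the -Wstack-usage threshold */
    (void)contrived_example;
}

/* The compiler warns at build time, and demo.su contains a line like:
 *   demo.c:4:6:breaking_things  30008  static
 */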

One possible approach is to work out the address of the end of the stack on your hardware so you can have a debug macro that will tell you how much stack is remaining. Of course you can't close the gate after the horse has bolted -- if you have a function that needs to use a lot of stack then you'll need to do the stack check before calling that function; not at the start of the function (typically the stack is consumed on function entry, not on execution reaching the declaration of the large buffer).
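A sketch of that debug check, assuming the linker script exports a symbol at the lowest address of the stack (__StackLimit is a hypothetical name; check your own linker script) and that the stack grows downward, as on ARM:

#include <stdint.h>

extern uint8_t __StackLimit;  /* hypothetical linker-script symbol */

static inline uintptr_t stack_bytes_remaining(void)
{
    uint8_t marker;  /* the address of a local approximates the current SP */
    return (uintptr_t)&marker - (uintptr_t)&__StackLimit;
}

You would call stack_bytes_remaining() before invoking a stack-hungry function, per the point above about checking before the call rather than inside the callee.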

Ideally you'd design your code in such a way that all possible code paths are known and you can get your head around it. clang has the ability to generate a call graph showing all functions that call each other -- if your code is a mess then this looks like the cat got into the wool basket, but if not then you can associate stack usage with each function and work out the theoretical maximum possible stack usage for any code path.

There are probably commercial tools that do all this stuff automatically although IDK what they are.

3 Comments

Compiler options and call graphs aren't very useful, because these kinds of systems have lots of interrupts, making it impossible to determine stack usage statically. It must be examined at run-time.
@Lundin Interrupts should not use any significant amount of stack (to be portable they should do nothing besides setting an atomic flag)
It's not the amount that matters, it is the increased stack peak usage on top of what you already thought was the maximum use. Stack overflow because the programmer failed to consider the worst-case interrupt scenario is a well-known problem in embedded systems.

The question is a bit confused, because you should never allocate such large arrays on the stack, particularly not in an embedded system. If you declare a local array of 30KB on a system with 20KB of RAM, you will simply kill the stack with a stack overflow at run-time.

You can only protect yourself from stack overflow with programmer knowledge and code review, though a few toolchains provide means to measure stack usage, and some MCUs will raise meaningful errors such as a software interrupt/exception upon stack overflow. There's also a manual way to measure stack use from a debugger: fill the whole stack with some nonsense value like 0xAA, execute the program with maximum code coverage, then analyse the memory map to see how far down the stack you can still find 0xAA.
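A sketch of that fill-and-measure technique, assuming the linker script exports symbols for the stack bounds (the names below are hypothetical) and that the painting runs early at boot:

#include <stdint.h>

extern uint8_t __StackLimit;  /* hypothetical: lowest stack address */
extern uint8_t __StackTop;    /* hypothetical: highest stack address */

#define PAINT 0xAAu

void stack_paint(void)  /* call from the reset handler, before main() */
{
    uint8_t marker;
    /* Leave a margin so we don't clobber this function's own frame;
     * production code usually paints from the startup assembly instead. */
    for (uint8_t *p = &__StackLimit; p < &marker - 64; p++)
        *p = PAINT;
}

uint32_t stack_high_water(void)  /* worst-case bytes used so far */
{
    const uint8_t *p = &__StackLimit;
    while (p < &__StackTop && *p == PAINT)
        p++;  /* scan up to the first byte that was overwritten */
    return (uint32_t)(&__StackTop - p);
}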

But if declaring arrays dynamically, how could one tell if the system was out of memory

By checking the result of malloc. But this is a non-issue in your case, since you should never use dynamic memory on a 20KB bare-metal system; it simply doesn't make sense to do so.

What you should do is declare that array with static storage duration, by making it static and/or by moving it to file scope. In that case, you will get a linker error if you use too much memory. The linker will whine "out of memory in section .bss" or similar.
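A minimal sketch (size and names are illustrative):

#include <stdint.h>

/* Static storage duration: if .bss cannot hold this, the link fails
 * instead of the stack silently overflowing at run-time. */
static uint8_t sensor_data[14000];

void read_sensors(void)
{
    sensor_data[0] = 42;  /* no stack allocation involved */
}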

Comments
