3

On Linux, with 16 GB of RAM, why would the following segfault:

#include <stdlib.h>

#define N 44000

int main(void) {
    long width = N*2 - 1;
    int * c = (int *) calloc(width*N, sizeof(int));
    c[N/2] = 1;
    return 0;
}

According to GDB the problem is from c[N/2] = 1, but what is the reason?

5 Comments
  • Are you compiling/running on a 64-bit architecture? Commented Sep 3, 2009 at 17:21
  • 2
    Even on 64-bit Linux, user limit settings and whatnot can get in the way. I remember the company I worked for had 32 GB RAM machines, and I was only allowed to take 4 GB for one user process. Commented Sep 3, 2009 at 17:26
  • But are you sure you're using the 64-bit compiler? If you print out sizeof(long), what is it? Commented Sep 3, 2009 at 17:29
  • @pavel: printf("%ld\n", sizeof(long)); = 8 Commented Sep 3, 2009 at 17:47
  • Have you possibly run firefox, movie players and openoffice in the background, too? Commented Sep 3, 2009 at 17:57

7 Answers

6

It's probably because the return value of calloc was NULL.

The amount of physical RAM in your box does not directly determine how much memory you can allocate with calloc/malloc/realloc. That is governed by the amount of virtual address space still available to your process.
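As a quick check, the process can query that limit itself. A minimal sketch, assuming a Linux/glibc target; RLIMIT_AS is the address-space cap that ulimit -v sets:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        puts("address-space limit: unlimited");
    else
        printf("address-space limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}

If the soft limit is below the ~15 GB the question's calloc asks for, the allocation cannot succeed no matter how much physical RAM is installed.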


10 Comments

Based on my calculations (assuming int is 4 bytes) you're asking the OS for ~14GB of contiguous space, which I could see failing in this case.
2.6GB, actually, but still enough to fail.
I got 14.4241GB in my calculations (assuming 4bytes int too).
Yup-- I screwed up the overflow. Bad coder, no coffee. Wait. More coffee.
wow I really can't read apparently - I read width*N as width. /bonk-self
6

Your calculation overflows the range of a 32-bit signed integer, which is what long may be on your platform. You should use size_t instead of long; it is guaranteed to be able to hold the size of the largest memory block that your system can allocate.
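To make the wraparound concrete, here is a small sketch; int32_t stands in for a 32-bit long (converting the out-of-range value is implementation-defined, but it wraps on mainstream compilers):

#include <stdint.h>
#include <stdio.h>

#define N 44000

int main(void) {
    int32_t width = N * 2 - 1;              /* 87999: fits easily */
    int64_t product = (int64_t)width * N;   /* 3871956000: too big for 32 bits */
    int32_t wrapped = (int32_t)product;     /* implementation-defined wrap */
    printf("64-bit product: %lld\n", (long long)product);
    printf("32-bit result:  %d\n", (int)wrapped);  /* prints -423011296 */
    return 0;
}

A negative value converted to calloc's size_t parameter becomes an enormous unsigned request, which the allocator will reject.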

7 Comments

The largest memory block a system can allocate is never a round power of 2 number. Shared memory, library mappings, file mappings, runtime metadata, etc... will always occupy some subset of allocations.
Read my answer. I did not say "The system is able to allocate blocks of any size representable in a size_t". I said "size_t can represent any size block that the system is able to allocate." These are quite different statements.
In particular I am suggesting that his program is crashing not because he asked for too much RAM but because he inadvertently passed a negative number as the argument to calloc.
Regardless of whether there are shared libs occupying the space, the size will eventually be a power of 2. If he's on 32-bit Linux he will (1) overflow the argument to calloc and (2) run out of address space; calloc would fail even if the argument didn't overflow.
On 64-bit Linux (and all other Unices), GCC follows the LP64 model - that is, both pointers and long are 64-bit. See gcc.fyxm.net/summit/2003/Porting%20to%2064%20bit.pdf
4

You're allocating around 14-15 GB of memory, and for whatever reason the allocator cannot give you that much at the moment, so calloc returns NULL and you segfault when you dereference the NULL pointer.

Check if calloc returns NULL.

That's assuming you're compiling a 64-bit program under a 64-bit Linux. If you're doing something else, you might overflow the calculation of the first argument to calloc if long is not 64 bits on your system.

For example, try

#include    <stdlib.h>
#include    <stdio.h>

#define N    44000L

int main(void)
{
    size_t width = N * 2 - 1;
    printf("Longs are %lu bytes. About to allocate %lu bytes\n",
           sizeof(long), width * N * sizeof(int));
    int *c = calloc(width * N, sizeof(int));
    if (c == NULL) {
        perror("calloc");
        return 1;
    }
    c[N / 2] = 1;
    return 0;
}

1 Comment

It does return NULL. Is there a way to get around that?
2

You are asking for 2.6 GB of RAM (no, you aren't: you are asking for about 14 GB on 64-bit; 2.6 GB is what the overflowed, truncated calculation yields on 32-bit). Apparently, Linux's heap is utilized enough that calloc() can't allocate that much at once.

This works fine on Mac OS X (both 32- and 64-bit), but only just barely (and it would likely fail on a different system with a different dyld shared cache and set of frameworks).

And, of course, it should work dandy under 64-bit on any system (even the 32-bit version with the bad calculation worked, but only coincidentally).

One more detail: in a real-world app, the largest possible contiguous allocation shrinks drastically as the complexity and/or running time of the application increases. The more of the heap that is used, the less contiguous space there is left to allocate.


2

You might want to change the #define to:

#define N    44000L

just to make sure the math is being done in long arithmetic. Otherwise you may be generating a negative number to pass to calloc.

calloc may be failing and returning NULL, which would cause the problem.

1 Comment

That won't make any difference unless int is 16 bits, which seems unlikely. 44000*2-1 fits in a 32-bit int and is assigned to the long width; width*N is then calculated as long, since that's the larger type, and calloc uses size_t internally.
1

Dollars to donuts, calloc() returned NULL because it couldn't satisfy the request, so attempting to dereference c caused the segfault. You should always check the result of *alloc() to make sure it isn't NULL.
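One common way to make that check impossible to forget is a wrapper (a sketch of the idiom; the name xcalloc is just a convention, not from this thread):

#include <stdio.h>
#include <stdlib.h>

/* calloc that never returns NULL: on failure, report and exit. */
static void *xcalloc(size_t nmemb, size_t size) {
    void *p = calloc(nmemb, size);
    if (p == NULL) {
        fprintf(stderr, "calloc(%zu, %zu) failed\n", nmemb, size);
        exit(EXIT_FAILURE);
    }
    return p;
}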


1

Create a 14 GB file, and memory-map it.
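A minimal sketch of that idea, assuming a 64-bit Linux host; backing.dat is a hypothetical file name, and ftruncate produces a sparse file, so disk blocks are only consumed as pages are actually written:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define N 44000L

int main(void) {
    size_t width = N * 2 - 1;
    size_t bytes = width * N * sizeof(int);

    int fd = open("backing.dat", O_RDWR | O_CREAT, 0600);
    if (fd == -1 || ftruncate(fd, (off_t)bytes) == -1) {
        perror("open/ftruncate");
        return 1;
    }

    /* Map the file read/write; untouched pages read back as zeros. */
    int *c = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (c == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    c[N / 2] = 1;   /* touching a page faults it in on demand */

    munmap(c, bytes);
    close(fd);
    return 0;
}

Pages come back zero-filled like calloc's, and only the pages you actually touch ever need physical backing, which sidesteps the single huge heap allocation.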

