3

I'm using 64-bit MinGW to compile C code on Windows x64. I'm using fwrite to write binary files from an in-memory array. I want to write ~20 GB with a single call, but it only writes about 1.4–1.5 GB and then stops writing (without crashing; it just hangs there, doing nothing). Is there any solution? Right now I'm writing 20 files and then merging them. Opening the file in 'ab' mode works, but I can't read the file properly if I use that mode.

Sample (pseudo)code:

    short* dst = malloc(20GB);               /* pseudocode: ~20 GB */
    /* calculations to fill dst */
    FILE* file = fopen("myfile", "wb");      /* "wb", not 'wb' */
    fwrite(dst, sizeof(short), 20GB / sizeof(short), file);
    fclose(file);

That program never ends, and the file size never grows beyond 1.5 GB.

  • Please show your code. Commented Sep 4, 2015 at 0:10
  • I've written pseudocode. Tell me if that helps. Commented Sep 4, 2015 at 1:16
  • Have you tried attaching a debugger and inspecting the call stack? Commented Sep 4, 2015 at 1:34

3 Answers

6

Write it in smaller chunks. For heaven's sake, don't try to malloc 20 GB.
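A minimal sketch of the chunked approach (write_chunked and CHUNK_BYTES are names I made up, not library APIs): write the buffer in bounded pieces and check each call.

    #include <stdio.h>

    #define CHUNK_BYTES (64u * 1024u * 1024u)   /* 64 MB per fwrite call */

    /* Write total bytes from buf in chunks; returns 0 on success. */
    static int write_chunked(const void* buf, size_t total, FILE* f)
    {
        const unsigned char* p = buf;
        while (total > 0) {
            size_t n = total < CHUNK_BYTES ? total : CHUNK_BYTES;
            if (fwrite(p, 1, n, f) != n)
                return -1;                      /* inspect ferror(f)/errno */
            p += n;
            total -= n;
        }
        return 0;
    }

If the 1.4–1.5 GB stall comes from a 32-bit size counter somewhere inside the runtime, bounded chunks sidestep it; they also make a failure show up as a short write instead of a silent hang.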


6 Comments

Mallocing 20 GB today is equivalent to mallocing 20 MB in, like, 1993.
I wonder how long it would take to allocate 20 GB, if it's even possible.
@dreamlax: It's like I wish I could fly across the country, but I can't find a plane that long.
May I ask why? Both suggestions: "smaller chunks" and "don't alloc 20 GB". How should I write such big files? I guess I could do some producer/consumer to avoid that big allocation, but it's much easier to code it this way.
@papanoel87: There's no point arguing that it's easier to write code that way if it doesn't work ;-) What happens if your program suddenly needs to work with more than 20 GB of data? The producer/consumer approach (see the sketch below) will not only improve scalability but efficiency as well. Consider how Windows copies a 20 GB file from one location to another; do you think it loads the entire file at once?
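A single-threaded sketch of that producer/consumer suggestion (fill_chunk is a hypothetical stand-in for the real calculation; no 20 GB allocation needed):

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK_ELEMS (8u * 1024u * 1024u)    /* 8M shorts = 16 MB per pass */

    /* Hypothetical stand-in for the real calculation: fill buf with
       the next count results, starting at logical offset. */
    static void fill_chunk(short* buf, size_t count, size_t offset)
    {
        for (size_t i = 0; i < count; i++)
            buf[i] = (short)(offset + i);       /* placeholder data */
    }

    static int stream_results(const char* path, size_t total_elems)
    {
        short* buf = malloc(CHUNK_ELEMS * sizeof *buf);
        FILE* f = fopen(path, "wb");
        if (!buf || !f) { free(buf); if (f) fclose(f); return -1; }

        for (size_t done = 0; done < total_elems; ) {
            size_t n = total_elems - done;
            if (n > CHUNK_ELEMS) n = CHUNK_ELEMS;
            fill_chunk(buf, n, done);                        /* produce */
            if (fwrite(buf, sizeof *buf, n, f) != n) break;  /* consume */
            done += n;
        }
        free(buf);
        return fclose(f);
    }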
1

Depending on the environment (operating system, memory model, file system), it might not be possible to create a file greater than 2 GB. This is especially true of MS-DOS file systems, and of course it can be true on any file system if there is insufficient disk space or allocation quota.

If you show your code, we could see if there is any intrinsic flaw in the algorithm and suggest alternatives.
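Either way, the first thing to check in the posted pseudocode is fwrite's return value. A sketch of a checked wrapper (checked_fwrite is my name, not a standard function):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* fwrite plus error reporting; returns the element count written. */
    static size_t checked_fwrite(const void* buf, size_t esize,
                                 size_t n, FILE* f)
    {
        size_t got = fwrite(buf, esize, n, f);
        if (got != n)
            fprintf(stderr, "fwrite: %llu of %llu elements written (%s)\n",
                    (unsigned long long)got, (unsigned long long)n,
                    ferror(f) ? strerror(errno) : "short write");
        return got;
    }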

1 Comment

I'm using Windows 7 x64, I've got 32 GB of RAM, and I'm on NTFS. I'm able to create big files; I can do that with 'ab' mode or by merging several binary files.
-1

MinGW is a 32-bit environment; AFAIK, a 64-bit variant does not exist.

It may be that fwrite() from MinGW is unable to deal with more than 2 GB or 4 GB unless MinGW is large-file aware.

If you can find something similar to truss(1), run your program under this debugging tool. With the information you provided, it is not possible to give better advice.
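On Windows, Sysinternals Process Monitor can serve as a rough truss substitute for tracing file I/O. Alternatively, instrument the writes yourself so a stall reveals the exact offset reached (logged_fwrite below is a made-up helper, not a library call):

    #include <stdio.h>

    /* fwrite wrapper that logs cumulative progress to stderr, so a hang
       shows exactly how many bytes actually made it to the file. */
    static size_t logged_fwrite(const void* buf, size_t n, FILE* f,
                                unsigned long long* total)
    {
        size_t got = fwrite(buf, 1, n, f);
        *total += got;
        fprintf(stderr, "wrote %llu bytes so far\n", *total);
        return got;
    }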

