Compiler: Microsoft Visual C++ 2005
Hardware: AMD 64-bit, 16 GB RAM
Sequential, read-only access of an 18 GB file completes with the following timing, file-access, and file-structure characteristics:
18,184,359,164 bytes (file length)
11,240,476,672 bytes (NTFS-compressed file length)
Time     File                      Method                  Disk
14:33?   compressed                fstream                 fixed disk
14:06    normal                    fstream                 fixed disk
12:22    normal                    winapi                  fixed disk
11:47    compressed                winapi                  fixed disk
11:29    compressed                fstream                 ram disk
10:37    compressed                winapi                  ram disk
7:18     compressed (7z, stored)   decompression to NTFS   12 GB ram disk
6:37     normal                    copy to same volume     fixed disk
The fstream constructor and access:
#define BUFFERSIZE 524288
unsigned int mbytes = BUFFERSIZE;
char* databuffer0 = (char*) malloc(mbytes);
ifstream datafile;
datafile.open("drv:/file.ext", ios::in | ios::binary);
datafile.read(databuffer0, mbytes);
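For reference, here is a minimal, self-contained sketch of the full sequential read loop built around that fstream snippet. The loop structure, byte counter, and cleanup are assumptions about the surrounding test harness, not code from the original test:

#include <fstream>
#include <cstdlib>
using namespace std;

#define BUFFERSIZE 524288

int main()
{
    unsigned int mbytes = BUFFERSIZE;
    char* databuffer0 = (char*) malloc(mbytes);

    ifstream datafile;
    datafile.open("drv:/file.ext", ios::in | ios::binary);  // placeholder path from the question

    unsigned long long total = 0;            // bytes consumed so far (assumed bookkeeping)
    while (datafile)
    {
        datafile.read(databuffer0, mbytes);
        streamsize got = datafile.gcount();  // bytes delivered by the last read, including
        if (got <= 0)                        // the short final block at end of file
            break;
        total += got;
    }

    datafile.close();
    free(databuffer0);
    return 0;
}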
The winapi constructor and access:
#define BUFFERSIZE 524288
unsigned int mbytes = BUFFERSIZE;
const TCHAR* const filex = _T("drv:/file.ext");
char ReadBuffer[BUFFERSIZE] = {0};
DWORD dwBytesRead = 0;
HANDLE hFile = CreateFile(filex, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if( FALSE == ReadFile(hFile, ReadBuffer, BUFFERSIZE-1, &dwBytesRead, NULL))
{ ...
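Likewise, a hedged sketch of the complete WinAPI read loop around that snippet. The loop, the handle check, the running total, and the CloseHandle cleanup are assumptions about the surrounding harness:

#include <windows.h>
#include <tchar.h>

#define BUFFERSIZE 524288

int main()
{
    const TCHAR* const filex = _T("drv:/file.ext");  // placeholder path from the question
    char ReadBuffer[BUFFERSIZE] = {0};

    HANDLE hFile = CreateFile(filex, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;                                    // open failed

    DWORD dwBytesRead = 0;
    unsigned long long total = 0;                    // bytes consumed so far (assumed bookkeeping)
    for (;;)
    {
        if (FALSE == ReadFile(hFile, ReadBuffer, BUFFERSIZE - 1, &dwBytesRead, NULL))
            break;                                   // read error
        if (dwBytesRead == 0)
            break;                                   // end of file
        total += dwBytesRead;
    }

    CloseHandle(hFile);
    return 0;
}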
For the fstream method, increasing the buffer size up to 16 MB does not decrease processing time. For the winapi method, all buffer sizes above 0.5 MB fail. What methods would optimize this implementation with respect to processing time?