
I'm trying to load ~2 GB of text files (approximately 35K files) in my Python script. I'm getting a MemoryError about a third of the way through, on page.read():

for f in files:
    page = open(f)
    pageContent = page.read().replace('\n', '')
    page.close()

    cFile_list.append(pageContent)

I've never dealt with objects or processes of this size in Python. I checked some other Python MemoryError-related threads, but nothing I found fixed my scenario. Hopefully there is something out there that can help me out.

4 Comments
  • You'll want to read the input in chunks. Take a look at the answer to this question: stackoverflow.com/questions/519633/… Commented Jun 23, 2011 at 16:10
  • If you're using a 64-bit machine, try using a 64-bit Python build. Commented Jun 23, 2011 at 16:12
  • I don't understand why you are loading the contents of all the files into cFile_list. What exactly do you want to do with the contents of the files? Perhaps you want to save the contents of each file to a corresponding new file after replacing '\n' with ''. If that is what you want, you can write the cleaned contents out to a new file inside the for loop itself, and then you won't get a memory error no matter how many files you process. Commented Jun 23, 2011 at 16:57
  • @Kris K. I think it is not the size of any single file that is causing the memory problems, but the size of the cFile_list object, which grows enormously with every iteration (see my previous comment). So reading in chunks won't help. In fact, the question itself seems vague. Commented Jun 23, 2011 at 17:00

3 Answers


You are trying to load too much into memory at once. This can be because of the process size limit (especially on a 32-bit OS), or because you don't have enough RAM.

A 64-bit OS (and 64-bit Python) would be able to handle this given enough RAM, but maybe you can simply change the way your program works so that not every page is in RAM at once.

What is cFile_list used for? Do you really need all the pages in memory at the same time?


2 Comments

cFile_list is a big list of documents. It ends up becoming the training and test set for a Naive Bayes Classifier. What would the alternative be as far as not having everything in memory at the same time?
@Greg, can you change your program to loop through the filenames? For each filename, read the file, clean up the contents, feed them to the classifier, and close the file. That way only one file needs to be in RAM at once.
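A minimal sketch of that one-file-at-a-time approach (the batch size and the train_on_batch call are illustrative placeholders, not anything from this thread):

def iter_documents(files):
    # Yield one cleaned document at a time instead of keeping all ~35K in a list.
    for f in files:
        with open(f) as page:
            yield page.read().replace('\n', '')

batch = []
for doc in iter_documents(files):
    batch.append(doc)
    if len(batch) == 100:          # small batches keep memory use bounded
        train_on_batch(batch)      # hypothetical stand-in for your classifier's training call
        batch = []
if batch:
    train_on_batch(batch)

Only one batch of documents is held in memory at any moment, so memory use no longer grows with the number of files.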

Consider using generators, if possible in your case:

file_list = []
for file_ in files:
    file_list.append(line.replace('\n', '') for line in open(file_))

file_list is now a list of generators, which is more memory-efficient than reading the whole contents of each file into a string. As soon as you need the full contents of a particular file as a string, you can do

string_ = ''.join(file_list[i])

Note, however, that iterating over file_list is only possible once due to the nature of iterators in Python.

See http://www.python.org/dev/peps/pep-0289/ for more details on generators.
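One variation on this idea (not part of the answer above) that avoids keeping thousands of file handles open is to wrap the open() call in a generator function, so each file is opened only when its generator is actually consumed and is closed as soon as iteration finishes:

def lazy_file(path):
    # The file is opened when iteration starts and closed when it ends.
    with open(path) as fh:
        for line in fh:
            yield line.replace('\n', '')

file_list = [lazy_file(f) for f in files]

string_ = ''.join(file_list[0])   # the file is opened, read, and closed here

As with the answer's version, each generator can only be consumed once.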

2 Comments

OK, thanks. I was able to load all the files, but when I try to do the join I get the following: ValueError: I/O operation on closed file
My fault: the files were being closed as soon as the with block's scope ended. I have edited the code. Note that you should also ensure that opening each file does not fail.

Reading whole files into memory this way is not efficient.

A better approach is to use an index.

First, build a dictionary with the starting offset of each line (the key is the line number, and the value is the cumulative length of all previous lines):

t = open(file, 'r')
dict_pos = {}

kolvo = 0   # current line number
length = 0  # cumulative length of the lines read so far
for each in t:
    dict_pos[kolvo] = length
    length = length + len(each)
    kolvo = kolvo + 1

and finally, the lookup function:

def give_line(line_number):
    # Jump to the stored offset for this line, then read just that line.
    t.seek(dict_pos.get(line_number))
    line = t.readline()
    return line

t.seek(dict_pos[line_number]) moves the file position directly to the start of the requested line, so the readline() that follows returns exactly the line you want. With this approach (jumping straight to the required position instead of reading through the whole file) you save a significant amount of time and can work with huge files.
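A self-contained variant of the same idea, sketched as a small helper class (the class, method, and file names here are illustrative, not from the answer):

class LineIndex(object):
    """Index a text file by line so single lines can be fetched with seek()."""

    def __init__(self, path):
        self._fh = open(path, 'rb')   # binary mode keeps byte offsets exact
        self._offsets = []
        pos = 0
        for line in self._fh:
            self._offsets.append(pos)
            pos += len(line)

    def line(self, n):
        # Jump straight to the start of line n and read only that line.
        self._fh.seek(self._offsets[n])
        return self._fh.readline()

# idx = LineIndex('big_corpus.txt')   # illustrative file name
# print(idx.line(100))                # the 101st line, without scanning the file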

