
I wrote about 50 classes that I use to connect to and work with websites using mechanize and threading. They all work concurrently, but they don't depend on each other, so that means one class, one website, one thread. It's not a particularly elegant solution, especially for managing the code, since a lot of the code repeats in each class (but not nearly enough to merge them into one class that takes arguments, as some sites may require additional processing of retrieved data in the middle of methods, like 'login', that others might not need). As I said, it's not elegant, but it works. Needless to say, I welcome all recommendations on how to write this better without using the one-class-per-website approach. Adding functionality to, or managing the code of, each class is a daunting task.

However, I found out that each thread takes about 8MB of memory, so with 50 running threads we are looking at about 400MB of usage. If it were running on my own system I wouldn't have a problem with that, but since it's running on a VPS with only 1GB of memory, it's starting to be an issue. Can you tell me how to reduce the memory usage, or is there any other way to work with multiple sites concurrently?

I used this quick test Python program to check whether it's the data stored in my application's variables that is using the memory, or something else. As you can see in the following code, it only runs the sleep() function, yet each thread still uses 8MB of memory.

from thread import start_new_thread  # Python 2 low-level threading API
from time import sleep

def sleeper():
    try:
        while 1:
            sleep(10000)
    except:
        # On interpreter shutdown, sleeping threads get interrupted;
        # only re-raise if the test is still running.
        if running: raise

def test():
    global running
    n = 0
    running = True
    try:
        while 1:
            start_new_thread(sleeper, ())
            n += 1
            if not (n % 50):
                print n
    except Exception, e:
        running = False
        print 'Exception raised:', e
    print 'Biggest number of threads:', n

if __name__ == '__main__':
    test()

When I run this, the output is:

50
100
150
Exception raised: can't start new thread
Biggest number of threads: 188

And by removing the running = False line, I can then measure the free memory using the free -m command in a shell:

             total       used       free     shared    buffers     cached
Mem:          1536       1533          2          0          0          0
-/+ buffers/cache:       1533          2
Swap:            0          0          0

The calculation behind the 8MB-per-thread figure is simple: divide the difference between the memory used before and while the above test application is running by the maximum number of threads it managed to start.

It's probably only allocated (not resident) memory, because looking at top, the python process uses only about 0.6% of memory.
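That ~8MB matches the default per-thread stack reservation on Linux (ulimit -s is commonly 8192 KB), and it is virtual address space rather than resident memory. In CPython you can shrink the reservation with threading.stack_size() before starting any threads. A minimal sketch in modern Python (the 256 KB figure is an assumption suited to threads that mostly block on I/O; minimum size and granularity are platform-dependent):

```python
import threading
import time

# Shrink the per-thread stack reservation *before* starting any threads.
# The Linux default (ulimit -s) is commonly 8 MB of virtual address space
# per thread; 256 KB is plenty for threads that mostly block on I/O.
# The minimum size and granularity are platform-dependent.
threading.stack_size(256 * 1024)

def sleeper():
    time.sleep(0.1)  # stands in for blocking network work

threads = [threading.Thread(target=sleeper) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("threads run:", len(threads))
```

With the smaller stacks, the same number of threads reserves a fraction of the address space, which matters on a memory-limited VPS.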

  • What's taking up the memory? I'd venture to guess that it's the data you're extracting from the sites. If that's the case, then there's probably not a lot that you could do short of throttling the number of executing threads. Commented Jan 9, 2012 at 23:30
  • How exactly do you measure memory usage? I'd guess that those 8MB are not really allocated to each single thread. A huge part of those 8MB may be shared between the threads (just a guess..)? Commented Jan 9, 2012 at 23:34
  • 1
    Is this a hosting? what about ulimit -u ? and ulimit -a? Commented Jan 9, 2012 at 23:52
  • 1
    @Andrew: So, you roughly measured the overhead of a single thread in python. After all, 8MB sounds reasonable these days ... Commented Jan 9, 2012 at 23:54
  • 1
    @andrew, ulimit -s may be fixed whith limit pam module they are a soft and hard value for each parameter. Also you can assign a custom limit for each user in bash.rc. For example take a look to Oracle documentation, Oracle server needs to customize this parameters to work properly. Commented Jan 10, 2012 at 8:25

4 Answers


2 Comments

This. If resource management is an issue, just have a thread pool and tune the pool limit.
Thank you! It looks like Gevent is what I've been looking for.
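The thread-pool idea from these comments is now in the standard library as concurrent.futures.ThreadPoolExecutor (Python 3; available on 2.x via the futures backport). A minimal sketch, where fetch() and the site list are placeholders for the per-site classes:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(site):
    # Placeholder for a per-site class's "connect, log in, scrape" work.
    return "data from %s" % site

sites = ["site-%d" % i for i in range(50)]

# At most 10 threads ever exist, so memory stays bounded no matter
# how many sites are queued.
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = {pool.submit(fetch, s): s for s in sites}
    results = {futures[f]: f.result() for f in as_completed(futures)}

print(len(results), "sites processed")
```

Tuning max_workers trades memory for concurrency: 10 workers means at most 10 thread stacks, however many sites are pending.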

Using "one thread per request" is OK and easy for many use cases. However, it requires a lot of resources (as you experienced).

A better approach is an asynchronous one, but unfortunately it is a lot more complex.

Some hints in this direction:

5 Comments

Thanks, much appreciated. I read about Twisted before, but sadly I don't know much about it, and by the looks of it I wouldn't be able to use mechanize with it. I'll take a look at whether I could make mechanize work with asyncore.
After all, a "perfect" solution would be a mix of thread pools with one thread per CPU core (to utilize them for processing tasks) and asynchronous IO. A practical solution will depend on your actual application code. Maybe even a simple solution based on select will do it for you.
This means: in your thread: send a bunch of requests, then enter a loop which will select on the appropriate sockets, and handle any incoming data one by one... and so on. After all, the OS cares about socket IO anyway, your task is to interface with the OS in the most efficient way possible.
Thing is, the code I have is quite simple, really. Each subclass is much the same, just with different URLs, names, values, etc., and occasionally a different way of processing the data. They do not depend on each other at all. All I want is to run them concurrently, wait for them to complete their work, and then exit. All the solutions I've read about are for more complex things, I think. I can't believe no one has developed a module for simple asynchronous/threaded execution of classes or functions that don't depend on each other at all.
@Andrew: All the required code & framework exists, you just have to use it now ;)
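A toy Python 3 sketch of the select-loop idea described in these comments, using the stdlib selectors module and socketpair() to stand in for real site connections (the payloads and indices are invented for illustration):

```python
import selectors
import socket

# Five socket pairs: one end plays the remote server, the other is the
# connection we multiplex with a single select loop.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(5)]

for i, (server_end, client_end) in enumerate(pairs):
    client_end.setblocking(False)
    sel.register(client_end, selectors.EVENT_READ, data=i)  # data = site index
    server_end.sendall(b"response %d" % i)

# One thread waits on all sockets at once and handles whichever is ready.
received = {}
while len(received) < len(pairs):
    for key, _events in sel.select(timeout=1):
        received[key.data] = key.fileobj.recv(1024)
        sel.unregister(key.fileobj)

for server_end, client_end in pairs:
    server_end.close()
    client_end.close()

print(sorted(received))
```

The same loop structure scales to 50 connections with a single thread, since the OS does the waiting.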

The solution is to replace code like this:

1) Do something.
2) Wait for something to happen.
3) Do something else.

With code like this:

1) Do something.
2) Arrange it so that when something happens, something else gets done.
3) Done.

Somewhere else, you have a few threads that do this:

1) Wait for anything to happen.
2) Handle whatever happened.
3) Go to step 1.

In the first case, if you're waiting for 50 things to happen, you have 50 threads sitting around waiting for 50 things to happen. In the second case, you have one thread waiting around that will do whichever of those 50 things needs to get done.

So, don't use a thread to wait for a single thing to happen. Instead, arrange it so that when that thing happens, some other thread will do whatever needs to get done next.
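In modern Python, the "one thread waits for anything to happen" structure is exactly what an event loop provides. A hedged asyncio sketch (Python 3.7+), where asyncio.sleep() stands in for network I/O:

```python
import asyncio

async def fetch(site):
    # "Arrange it so that when something happens, something else gets done":
    # await suspends this coroutine; the event loop runs others meanwhile.
    await asyncio.sleep(0.01)  # stands in for waiting on network I/O
    return "data from %s" % site

async def main():
    # One thread, 50 concurrent waits.
    return await asyncio.gather(*(fetch("site-%d" % i) for i in range(50)))

results = asyncio.run(main())
print(len(results), "responses")
```

All 50 "requests" overlap inside a single thread, so there is one stack to pay for instead of 50.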



I'm no expert on Python, but maybe have a few thread pools which control the total number of active threads, and hand off a 'request' to a thread once it's done with its previous request. The request doesn't have to be the full thread object, just enough data to complete whatever the request is.

You could also structure it so you have thread pool A with N threads pinging the websites; once the data is retrieved, hand it off to thread pool B with Y threads crunching the data.
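A rough Python 3 sketch of that two-pool hand-off using queue.Queue, with pool A (4 fetchers) feeding pool B (2 crunchers); the site names and payloads are placeholders:

```python
import queue
import threading

SITES = ["site-%d" % i for i in range(20)]
raw = queue.Queue()        # hand-off queue: pool A -> pool B
crunched = []
crunched_lock = threading.Lock()

def fetch_worker(jobs):
    # Pool A: "ping the website" and push the retrieved data onward.
    while True:
        try:
            site = jobs.get_nowait()
        except queue.Empty:
            return
        raw.put("payload from %s" % site)

def crunch_worker():
    # Pool B: crunch whatever pool A produced; None is a stop sentinel.
    while True:
        item = raw.get()
        if item is None:
            return
        with crunched_lock:
            crunched.append(item.upper())

jobs = queue.Queue()
for s in SITES:
    jobs.put(s)

fetchers = [threading.Thread(target=fetch_worker, args=(jobs,)) for _ in range(4)]
crunchers = [threading.Thread(target=crunch_worker) for _ in range(2)]
for t in fetchers + crunchers:
    t.start()
for t in fetchers:
    t.join()
for _ in crunchers:
    raw.put(None)          # one sentinel per cruncher thread
for t in crunchers:
    t.join()

print(len(crunched), "items crunched")
```

The queue carries just the data, not the thread objects, so each pool can be sized independently of the number of sites.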

