
I've been stuck on this problem in Python for some time now: I need to create a number of methods inside a class that all do almost the same thing, but with different instance member variables. My first attempt is something like this:

from functools import partial


class Operations:
    def __init__(self):
        self.a = 10
        self.b = 20
        self.c = 30
        self.add_operation_1 = partial(self.generic_add_1, 'a', 'b')
        self.add_operation_2 = partial(self.generic_add_1, 'a', 'c')

    def generic_add_1(self, first, second):
        first_value = getattr(self, first)
        second_value = getattr(self, second)
        setattr(self, first, first_value + second_value)


instance = Operations()
instance.add_operation_1()
print(instance.a)
# Should print 30
instance.add_operation_2()
print(instance.a)
# Should print 60

As you can see, I use getattr and setattr to reference the attributes I need to change.

This works, but it is really slow, because partial only stores the arguments and, every time the function is called, forwards them to the original function. Also, I'm not sure about this, but aren't getattr and setattr a bit slower than plain attribute access like object.property?
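I suppose I could measure that overhead with timeit; a rough sketch (the Dummy class and the loop counts are just for illustration):

import timeit

class Dummy:
    def __init__(self):
        self.a = 10

d = Dummy()
print(timeit.timeit(lambda: d.a, number=1000000))              # plain attribute access
print(timeit.timeit(lambda: getattr(d, 'a'), number=1000000))  # getattr with a string name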

So I came up with a second attempt:

class Operations:
    def __init__(self):
        self.a = 10
        self.b = 20
        self.c = 30
        self.add_operation_1 = self.generic_add('a', 'b')
        self.add_operation_2 = self.generic_add('a', 'c')

    def generic_add(self, first, second):
        first_value = getattr(self, first)
        second_value = getattr(self, second)

        def real_operation():
            setattr(self, first, first_value + second_value)

        return real_operation


instance = Operations()
instance.add_operation_1()
print(instance.a)
# Should print 30
instance.add_operation_2()
print(instance.a)
# Should print 60, but it prints 40 instead!!!

This time I didn't use partial but a closure instead. The main advantage is that getattr is only executed once, when the instance is created, and not on every call, but I can't find a way to get rid of setattr. And as a side effect this doesn't work as I expected: getattr reads the attribute values up front, so any later changes to those attributes won't be seen by the returned functions.
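Moving the lookups inside the inner function would fix the stale-value problem, but then I'm back to a getattr and a setattr on every call, for example:

def generic_add(self, first, second):
    def real_operation():
        # look the attributes up at call time, so later changes are seen
        setattr(self, first, getattr(self, first) + getattr(self, second))

    return real_operation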

So now I'm kind of stuck. Is there a way to generate a method like this:

def expected_function(self):
    self.a = self.a + self.b

given the property names?

Thanks.

  • "The main advantage is that getattr is only executed once" - that's not an advantage. That's a bug. Commented Mar 2, 2018 at 23:14

3 Answers

def function_generate(v, s1, s2):
    # v is the instance's attribute dictionary; s1 and s2 are attribute names
    def f():
        v[s1] += v[s2]
    return f

class Operations:
    def __init__(self):
        self.a = 10
        self.b = 20
        self.c = 30
        # vars(self) is self.__dict__, so the closures operate on the live attributes
        namespace = vars(self)
        self.add_operation_1 = function_generate(namespace, 'a', 'b')
        self.add_operation_2 = function_generate(namespace, 'a', 'c')


instance = Operations()
instance.add_operation_1()
print(instance.a)
# Should print 30
instance.add_operation_2()
print(instance.a)
# Should print 60
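If performance is the main concern, here is a rough way to compare this with the partial/getattr version from the question (it reuses function_generate from above; absolute numbers will vary by machine):

import timeit
from functools import partial

class Ops:
    def __init__(self):
        self.a = 10
        self.b = 20
        # the partial + getattr/setattr variant from the question
        self.via_partial = partial(self.generic_add, 'a', 'b')
        # the dict-based closure from this answer
        self.via_dict = function_generate(vars(self), 'a', 'b')

    def generic_add(self, first, second):
        setattr(self, first, getattr(self, first) + getattr(self, second))


ops = Ops()
print(timeit.timeit(ops.via_partial, number=100000))
print(timeit.timeit(ops.via_dict, number=100000))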

3 Comments

I like this, but instead of using __dict__ directly, do something like namespace = vars(self), then namespace[s1] += namespace[s2]. At the very least, since performance seems to be an issue, you're avoiding an extra attribute look-up, plus I think it just reads nicer.
@PaulCornelius this is an elegant solution. Although when indexing with v[s1], an extra bytecode is generated: BINARY_SUBSCR. This is caused because Python needs to convert the string s1 to the property s1. Could there be a way to do that conversion at initialization instead of at each function call?
I don't know what you mean by convert string s1 to property s1. v[s1] uses the string s1 to look up a key in a dictionary. You must find the two values to be added and find where to put the answer - I don't see how that can possibly be avoided. And if you're going to obsess over every extra byte code, you're in deep trouble anyway.

As pointed out by dospro in his comment, getattr being executed only once is a bug. Your closure will use outdated values in subsequent calls.

Regarding performance, you should gain some by using the __dict__ attribute directly instead of setattr/getattr.

To get a hint of why accessing __dict__ directly is faster than getattr / setattr, we can look at the generated bytecode:

self.__dict__['a'] = 1

0 LOAD_CONST               1 (1)
2 LOAD_FAST                0 (self)
4 LOAD_ATTR                0 (__dict__)
6 LOAD_CONST               2 ('a')
8 STORE_SUBSCR

setattr(self, 'a', 1)

0 LOAD_GLOBAL              0 (setattr)
2 LOAD_FAST                0 (self)
4 LOAD_CONST               1 ('a')
6 LOAD_CONST               2 (1)
8 CALL_FUNCTION            3
10 POP_TOP

setattr compiles to a global name lookup followed by a function call, while writing through __dict__ is just an attribute load and a single STORE_SUBSCR, with no call overhead.
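These listings can be reproduced with the dis module (the exact opcodes vary between Python versions):

import dis

def with_dict(self):
    self.__dict__['a'] = 1

def with_setattr(self):
    setattr(self, 'a', 1)

dis.dis(with_dict)      # the subscript-store path
dis.dis(with_setattr)   # the global lookup + function call path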



OK, after doing a lot of experimenting I found a solution, although it's quite inelegant.

def generic_eval_add(self, first, second):
    # exec() defines real_operation with my_locals as its globals,
    # so `self` is resolvable inside the generated function
    my_locals = {
        'self': self
    }
    string_code = """def real_operation():
    self.{0} = self.{0} + self.{1}""".format(first, second)

    print(string_code)
    exec(string_code, my_locals)

    return my_locals['real_operation']
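For completeness, here is how this wires into the class from the question (a sketch; the usage just mirrors the original example):

class Operations:
    def __init__(self):
        self.a = 10
        self.b = 20
        self.c = 30
        self.add_operation_1 = self.generic_eval_add('a', 'b')
        self.add_operation_2 = self.generic_eval_add('a', 'c')

    def generic_eval_add(self, first, second):
        my_locals = {'self': self}
        string_code = """def real_operation():
    self.{0} = self.{0} + self.{1}""".format(first, second)
        exec(string_code, my_locals)
        return my_locals['real_operation']


instance = Operations()
instance.add_operation_1()
print(instance.a)   # 30
instance.add_operation_2()
print(instance.a)   # 60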

Since this can be evaluated at initialization, it does exactly what I needed. The big trade-offs are elegance, readability, error handling, etc. I think Paul Cornelius's solution is good enough for this use case, although I may consider Jinja templating for generating Python code.

Thanks for your help.

