
I am currently trying to do some optimization for locations on a map using OpenMDAO 1.7.2. The (preexisting) modules that do the calculations only support integer coordinates (resolution of one meter).

For now I am optimizing using one IndepVarComp per direction, each containing a float vector. These values are rounded before use, but this is quite inefficient because the solver mostly tries variations smaller than one, which vanish after rounding.
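Roughly, the rounding step before handing coordinates to the calculation modules looks like this (a minimal sketch; the function name is illustrative, not from the real model):

```python
import numpy as np

def snap_to_grid(coords):
    """Round float coordinates to the 1 m integer grid used by the
    preexisting calculation modules (illustrative helper)."""
    return np.round(np.asarray(coords)).astype(int)
```

Any variation the optimizer tries that is smaller than 0.5 m disappears entirely in this step, which is why so many solver iterations are wasted.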

When I attempt to initialize an IndepVarComp with an integer vector, the first iteration works fine (it uses the initial values), but the second iteration fails because the data in the IndepVarComp has been set to an empty ndarray.

Looking through the OpenMDAO source code I found out that this is because

indep_var_comp._init_unknowns_dict['x']['size'] == 0

which happens in Component's _add_variable() method whenever the data type is not differentiable.

Here is an example problem which illustrates how defining an integer IndepVarComp fails:

from openmdao.api import Component, Group, IndepVarComp, Problem, ScipyOptimizer

INITIAL_X = 1  # an integer, which triggers the failure

class ResultCalculator(Component):
    def __init__(self):
        super(ResultCalculator, self).__init__()
        self.add_param('x', INITIAL_X)
        self.add_output('y', 0.)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['y'] = (params['x'] - 3) ** 2 - 4

problem = Problem()
problem.root = Group()
problem.root.add('indep_var_comp', IndepVarComp('x', INITIAL_X))
problem.root.add('calculator', ResultCalculator())
problem.root.connect('indep_var_comp.x', 'calculator.x')

problem.driver = ScipyOptimizer()
problem.driver.options['optimizer'] = 'COBYLA'
problem.driver.add_desvar('indep_var_comp.x')
problem.driver.add_objective('calculator.y')

problem.setup()
problem.run()

Which fails with

ValueError: setting an array element with a sequence.

Note that everything works fine if I set INITIAL_X = 0. (i.e. a float).

How am I supposed to optimize for integers?

  • Please post a small test script that demonstrates the problem. Commented Dec 19, 2016 at 13:51
  • in your example, you're using a continuous optimizer for integer variables. Though the error message you're getting is not very clear, this simply won't work. You need to use a different kind of optimization algorithm entirely if you want integer variables. Was your choice of optimizer only to make the example give the error, or do you expect to actually use this one for the real problem? Commented Dec 20, 2016 at 2:34
  • @JustinGray Actually yes, it was one I was taking into consideration. I was not aware that it is strictly for continuous problems. I was merely looking for an optimizer that allows for constraints and does not use gradients. Commented Dec 20, 2016 at 8:40
  • My problem is in theory continuous, but in practice position variations only lead to significant changes at a resolution of about 10 m. That is why, by convention, we usually specify positions as integers. I was trying to also use an integer in the optimizer because a) the optimizer seemed to waste a lot of time varying positions by very small amounts even in the beginning (far from the optimum) and b) I am not 100% sure that there is no rounding to integer somewhere in the rather complex calculations (which are reused from a tool not even written in Python). Commented Dec 20, 2016 at 8:46

1 Answer


If you want to use integer variables, you need to pick a different kind of optimizer; you won't be able to force COBYLA to respect integrality. Additionally, if you do have some kind of integer rounding causing discontinuities in your analyses, then you really can't use COBYLA (or any other continuous optimizer) at all: they all make a fundamental assumption about smoothness of the function which you would be violating.

It sounds like you should possibly consider using a particle swarm or genetic algorithm for your problem. Alternatively, you could focus on making the analyses smooth and differentiable and scale some of your inputs to get more reasonable resolution. You can also loosen the convergence tolerance of the optimizer to have it stop iterating once it gets below physical significance in your design variables.
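As a rough sketch of the scaling/tolerance idea (shown with scipy directly on the toy objective from the question; the rhobeg and tol values are illustrative assumptions, not recommendations):

```python
from scipy.optimize import minimize

# Keep the variables continuous but match COBYLA's step sizes to the
# physical resolution: start with coarse steps and stop iterating once
# steps shrink below what is physically significant.
result = minimize(
    lambda x: (x[0] - 3.0) ** 2 - 4.0,  # same toy objective as above
    x0=[1.0],
    method='COBYLA',
    tol=1.0,                  # final trust-region radius ~ 1 m resolution
    options={'rhobeg': 10.0}, # initial steps ~ 10 m, the significant scale
)
```

This keeps the problem smooth from the optimizer's point of view while avoiding the tiny, physically meaningless variations described in the question.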


3 Comments

Thank you, that explains the cause of the problem. Is there an overview of which criteria must be fulfilled by objective / constraint functions for the optimizer?
Also I need to work on windows but did not yet manage to install pyoptsparse there. From the sources of ScipyOptimizer I assume that COBYLA and SLSQP are the only two optimizers available to me that can handle constraints (which means using integers is not possible). Is this correct?
There are no optimizers in pyoptsparse that are set up to handle integers, except NSGA-II as far as I know. Certainly COBYLA and SLSQP can't. You could use NSGA-II and formulate your constraints as an added penalty on the objective function. This is a common approach, though it has a few challenges that need to be addressed carefully.
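The penalty formulation mentioned above can be sketched as follows: fold constraint violations into the objective so that an optimizer without native constraint support (such as NSGA-II) can still handle them. The quadratic form and the weight are assumed choices that typically need tuning.

```python
def penalized_objective(y, violations, weight=1000.0):
    """Illustrative penalty method.

    y          -- raw objective value
    violations -- constraint values g_i, with g_i <= 0 meaning feasible
    weight     -- assumed penalty weight (a tuning parameter)
    """
    # Only positive (violated) constraint values contribute to the penalty.
    return y + weight * sum(max(0.0, g) ** 2 for g in violations)
```

A feasible point returns the raw objective unchanged, while violated constraints inflate it sharply, steering the population back toward the feasible region.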
