The most basic algorithm is probably gradient descent, which can also be used for maximization (then often called gradient ascent). It uses the gradient of the function to be optimized and, starting from an initial point, moves in the most promising direction. For maximization this is the direction of the gradient. Look at the examples on Wikipedia; they should give you an idea of how it works. Note that in its pure variant it always gets stuck in local optima. I don't know if there is a Java library for this. I programmed it myself some time ago, but that code is very specific to my task.
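To illustrate, here is a minimal gradient ascent sketch in plain Java (the objective f(x, y) = -(x-1)^2 - (y-2)^2 and its gradient are just assumptions for demonstration; the maximum is at (1, 2)):

public class GradientAscent {
    public static void main(String[] args) {
        double x = 0.0, y = 0.0;   // initial point
        double learningRate = 0.1; // step size
        for (int i = 0; i < 1000; i++) {
            // gradient of f: (-2(x-1), -2(y-2))
            double gx = -2.0 * (x - 1.0);
            double gy = -2.0 * (y - 2.0);
            // step in the direction of the gradient (ascent)
            x += learningRate * gx;
            y += learningRate * gy;
        }
        System.out.println("x = " + x + ", y = " + y); // approximately (1, 2)
    }
}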
Another thing to look at is the Nelder-Mead method. It is also susceptible to local optima and requires a reasonably smooth function. It is implemented in Flanagan's Java Scientific Library. Basically, you have to implement the interface MaximisationFunction, i.e. provide the method
public double function(double[] param)
which contains your function from above. This method evaluates your function at the given parameters param (in your case two values, x and y) and returns the function value.
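For example, such a class could look like this (the function body is just a placeholder; substitute your actual function of x and y):

import flanagan.math.MaximisationFunction;

public class YourFunction implements MaximisationFunction {
    public double function(double[] param) {
        double x = param[0];
        double y = param[1];
        // placeholder objective -- replace with your own formula
        return -(x - 500.0) * (x - 500.0) - (y - 2000.0) * (y - 2000.0);
    }
}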
Then you can use the whole program like this:
import java.util.Arrays;
import flanagan.math.Maximisation;

// Create instance of Maximisation
Maximisation max = new Maximisation();
// Create instance of the class holding the function to be maximised
YourFunction funct = new YourFunction();
// initial estimates (your initial x and y)
double[] start = {100.0, 3000.0};
// initial step sizes (take a good guess)
double[] step = new double[start.length];
Arrays.fill(step, 100);
// convergence tolerance
double ftol = 0.0001;
// maximal number of iterations
int maxIter = 5000;
// Nelder and Mead maximisation procedure
max.nelderMead(funct, start, step, ftol, maxIter);
// result of maximization
double result = max.getMaximum();
// maximising parameter values (your x and y)
double[] best = max.getParamValues();
Since your variables have restrictions, you should add some constraints via the addConstraint method of Maximisation.
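If I remember the API correctly, that would look something like the following; the conDir convention (-1 for a lower bound, +1 for an upper bound) is my assumption here, so check the library's documentation:

max.addConstraint(0, -1, 0.0);     // assumed: keep x (parameter 0) at or above 0
max.addConstraint(0, +1, 10000.0); // assumed: keep x (parameter 0) at or below 10000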