We can approach the problem in terms of generating sets of partial answers: num k 1 is the set {k}; num k 2 is the set of all numbers generated by combining num k 1 with num k 1; num k 3 is the set of all numbers generated by combining num k 1 with num k 2; and so forth. Each step reuses the sets computed in previous steps and applies one operator. Here are the first three steps. Note that each number is computed from two previously generated numbers and one operator.
num 3 1 = {3}
num 3 2 = {3-3=0, 3/3=1, 3=3, 3+3=6, 3*3=9}
num 3 3 = {3-9=-6, 3-6=-3, 1-3=-2, 0=0, 1/3=1/3, 3/6=1/2, 1=1, 6/3=2, 3=3, 3+1=4, 6=6, 9=9, 3+9=12, 3*6=18, 3*9=27}
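Assuming the operator set is +, -, *, and / (with division by zero skipped), and using Rational so results like 1/3 are exact, the stepwise construction above can be sketched as follows. The combine helper is hypothetical, not part of the original code:

```haskell
-- Combine two partial answers with every operator.
-- Division by zero is skipped via the guard.
combine :: Rational -> Rational -> [Rational]
combine a b = [a + b, a - b, a * b] ++ [a / b | b /= 0]

-- step k n: values reachable using n copies of k.
-- Smaller steps are recomputed here for clarity; the point of the
-- review below is precisely that they should be saved instead.
step :: Rational -> Int -> [Rational]
step k 1 = [k]
step k n = [ c | i <- [1 .. n - 1]
               , a <- step k i
               , b <- step k (n - i)
               , c <- combine a b ]
```

For k = 3, step 3 2 yields 6, 0, 9, and 1, matching the combinations shown above (the listed sets additionally carry forward values from earlier steps).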
Your list-based num function recomputes previous steps for two reasons.
- Step n recomputes all previous steps from scratch. For instance, num x 4 will compute num x 1, num x 2, and num x 3; then num x 3 will compute num x 1 and num x 2 again.
- There is a recursive call to num in an inner loop. Specifically, you have [... | ... a <- num x i, b <- num x (n-i) ]. This recomputes num x (n-i) for each value of a. You can hoist the recursive call out of the inner loop by writing [... | ... let b_input = num x (n-i), a <- num x i, b <- b_input]. (Compiler optimizations may do this automatically, but you shouldn't rely on it.)
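As a minimal illustration of that hoisting, here is a before/after sketch with a hypothetical stand-in for the expensive recursive call:

```haskell
-- Stand-in for an expensive recursive call (hypothetical).
expensive :: Int -> [Int]
expensive n = [1 .. n]

-- Without optimizations, the second generator re-evaluates
-- 'expensive m' once for every value of 'a':
slow :: Int -> Int -> [Int]
slow n m = [ a + b | a <- expensive n, b <- expensive m ]

-- Hoisting it into a 'let' qualifier before the outer generator
-- evaluates it once, and the resulting list is shared:
fast :: Int -> Int -> [Int]
fast n m = [ a + b | let bs = expensive m, a <- expensive n, b <- bs ]
```

Both produce the same results; only the amount of recomputation differs.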
Instead of recomputing previous results, save and reuse them. The easiest way to do this is to keep a list of previous results as the algorithm proceeds. Converting your code to save previous results is an instance of a more general technique known as memoization.
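One idiomatic way to memoize this in Haskell is a lazily built table of steps, where each level indexes back into the shared table instead of recursing. This is only a sketch, and the combine helper is a hypothetical stand-in for applying your four operators:

```haskell
-- Hypothetical operator-application helper.
combine :: Rational -> Rational -> [Rational]
combine a b = [a + b, a - b, a * b] ++ [a / b | b /= 0]

-- steps k: the infinite list [step 1, step 2, ...].
-- Because 'table' is shared, each level is computed at most once.
steps :: Rational -> [[Rational]]
steps k = table
  where
    table = [k] : map level [2 ..]
    level n = [ c | i <- [1 .. n - 1]
                  , a <- table !! (i - 1)
                  , b <- table !! (n - i - 1)
                  , c <- combine a b ]

numMemo :: Rational -> Int -> [Rational]
numMemo k n = steps k !! (n - 1)
```

Here laziness does the bookkeeping: asking for level n forces the earlier levels, and subsequent lookups reuse the already-evaluated list cells.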
Another source of inefficiency is num'', which searches the entire list to remove duplicates in quadratic time. Duplicates can be removed in O(n log n) time using sets from the Data.Set module.
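A sketch of the set-based deduplication (note that Data.Set requires an Ord instance, and toList returns the elements in ascending order, so the original ordering is not preserved):

```haskell
import qualified Data.Set as Set

-- O(n log n) deduplication, versus the quadratic scan of a
-- nub-style search over the whole list.
dedup :: Ord a => [a] -> [a]
dedup = Set.toList . Set.fromList
```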
In summary: in num k n, do not call num recursively, because that does redundant work. Instead, save the list of results from num k 1, num k 2, ..., num k (n-1) and pass this list to num k n. Also, use the Data.Set module to remove duplicate values instead of calling num''.
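Putting the summary together, here is one possible shape, again with a hypothetical combine helper: the earlier results are threaded along explicitly, and Data.Set removes duplicates at each step:

```haskell
import qualified Data.Set as Set

-- Hypothetical operator-application helper.
combine :: Rational -> Rational -> [Rational]
combine a b = [a + b, a - b, a * b] ++ [a / b | b /= 0]

-- numFrom prev n: results for step n, where prev holds the saved
-- results for steps 1 .. n-1. Deduplicated via Data.Set.
numFrom :: [[Rational]] -> Int -> [Rational]
numFrom prev n =
  Set.toList . Set.fromList $
    [ c | i <- [1 .. n - 1]
        , a <- prev !! (i - 1)
        , b <- prev !! (n - i - 1)
        , c <- combine a b ]

-- Build the saved results for steps 1 .. n, reusing each step
-- instead of recomputing it.
allSteps :: Rational -> Int -> [[Rational]]
allSteps k n = foldl extend [[k]] [2 .. n]
  where extend prev m = prev ++ [numFrom prev m]
```

The repeated list append is itself not optimal for very deep searches, but the dominant saving is that no step is ever computed twice.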
You have num'' (not a very good function name, IMO), which essentially ensures that there are no duplicates in your list. Why not use Data.Set? It's a much more efficient implementation of an unordered collection of unique elements, since internally it's implemented as a balanced binary tree. You could probably save a lot of time just with num'' = Data.Set.toList . Data.Set.fromList, unless num'' is meant to accept infinite streams. Since it looks like you have a finite search space, this should be safe.