I personally favor coding dynamic programming solutions with a top-down approach, especially in Python, because it lends itself to an easy recursive implementation using the cache decorator, as I explain below.
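To show the pattern I mean (using Fibonacci as a stand-in toy example, not the actual problem): you write the plain recursion, and `lru_cache` memoizes it for you.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: every computed n is kept
def fib(n):
    # Naive recursion would be exponential; with memoization each n
    # is computed only once, so this runs in O(n) calls.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025
```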
For the Abbreviation problem on HackerRank, I've typed up the following solution (which is really just a recursive solution taking advantage of the implicit memoization we get from the `lru_cache` decorator).
```python
from functools import lru_cache

def abbreviation(a, b):
    n, m = len(a), len(b)

    @lru_cache(maxsize=n * m)
    def abbreviation_helper(i, j):
        # a and b are visible here through the enclosing scope.
        # Fewer characters left in a than in b: impossible.
        if i < j:
            return False
        # b is exhausted: the rest of a must be all lowercase (deletable).
        if j == -1:
            return all(a[x].islower() for x in range(i + 1))
        # An uppercase character in a must match b[j] exactly.
        if a[i].isupper() and a[i] != b[j]:
            return False
        if a[i].upper() == b[j]:
            # Uppercase characters can only be matched, never deleted.
            if a[i].isupper():
                return abbreviation_helper(i - 1, j - 1)
            # Lowercase: either capitalize it to match b[j], or delete it.
            return abbreviation_helper(i - 1, j - 1) or abbreviation_helper(i - 1, j)
        # Lowercase character that can't match b[j]: deleting it is the only option.
        return abbreviation_helper(i - 1, j)

    ret = abbreviation_helper(n - 1, m - 1)
    if ret:
        return "YES"
    return "NO"
```
This works and gives correct answers; however, it times out on a few test cases (namely 10, 12, 13, and 14). My question is whether there's something I can do to further optimize it so that it doesn't time out, while keeping the logic and approach of the code mostly the same. I would have thought the cache decorator would be enough to keep the runtime reasonable by avoiding redundant computations, but maybe I'm making some unnecessary extra calls. If anyone has any thoughts, please let me know.
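One way I've been sanity-checking whether the cache is actually absorbing the redundant calls is `cache_info()`. On a toy function (not my solution) with a deliberately undersized cache, you can see evictions turn what should be cache hits back into recomputation:

```python
from functools import lru_cache

@lru_cache(maxsize=4)  # deliberately undersized so evictions show up
def square(n):
    return n * n

for n in [0, 1, 2, 3, 4, 0]:
    square(n)

# The final square(0) is a miss: the entry for 0 was the least recently
# used and got evicted when 4 was inserted into the full cache.
print(square.cache_info())  # CacheInfo(hits=0, misses=6, maxsize=4, currsize=4)
```

So if `maxsize` is smaller than the number of distinct `(i, j)` states, some results get thrown away and recomputed.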
As far as big-O notation goes, this should be asymptotically the same as a bottom-up approach (there are only O(n*m) distinct subproblems), so what's causing this one to time out?
I've tried the solution using the bottom-up approach, which works fine. I also originally had a solution that passed sliced strings as arguments, but I changed the calls to accept integer indices as an attempt to optimize further. I was expecting that to suffice.
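For comparison, the bottom-up version I tried looks roughly like this (a sketch from memory with the same recurrence, not my exact submission):

```python
def abbreviation_bottom_up(a, b):
    n, m = len(a), len(b)
    # dp[i][j]: can the prefix a[:i] be turned into the prefix b[:j]?
    dp = [[False] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = True
    for i in range(1, n + 1):
        # a[:i] can become the empty string only if it is all lowercase.
        dp[i][0] = dp[i - 1][0] and a[i - 1].islower()
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ch = a[i - 1]
            if ch.upper() == b[j - 1]:
                # Capitalize (or keep) ch to match b[j-1] ...
                dp[i][j] = dp[i - 1][j - 1]
                # ... or delete ch, which is only allowed if it is lowercase.
                if ch.islower():
                    dp[i][j] = dp[i][j] or dp[i - 1][j]
            elif ch.islower():
                # ch cannot match b[j-1]; deleting it is the only option.
                dp[i][j] = dp[i - 1][j]
            # An uppercase ch that doesn't match b[j-1] leaves dp[i][j] False.
    return "YES" if dp[n][m] else "NO"

print(abbreviation_bottom_up("daBcd", "ABC"))  # YES (the sample case)
```

This fills the same n*m state table iteratively, so there is no recursion overhead and no cache lookups at all.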