On recent 64-bit Python 3 builds, a string instance takes up at least 49 bytes. Also keep in mind that if a string contains non-ASCII characters, its memory usage jumps up even more:
>>> sys.getsizeof('t')
50
>>> sys.getsizeof('я')
76
Notice that if even one character in a string is non-ASCII, every character in it takes up more space (2 or 4 bytes each):
>>> sys.getsizeof('t12345')
55 # +5 bytes, compared to 't'
>>> sys.getsizeof('я12345')
86 # +10 bytes, compared to 'я'
This has to do with the internal representation of strings introduced in Python 3.3; see PEP 393 -- Flexible String Representation for the details.
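In short, PEP 393 stores each string in the narrowest of three fixed-width layouts (1, 2, or 4 bytes per character) that can hold its widest code point. The exact byte counts depend on the build, but on a typical 64-bit CPython you can see all three widths:
>>> sys.getsizeof('\xff')
74 # U+00FF fits in Latin-1: 1 byte per character
>>> sys.getsizeof('я')
76 # basic multilingual plane: 2 bytes per character
>>> sys.getsizeof('\U0001F600')
80 # outside the BMP: 4 bytes per character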
Python is, in general, not very memory-efficient when it comes to having lots of small objects, and not just strings. See these examples:
>>> sys.getsizeof(1)
28
>>> sys.getsizeof(True)
28
>>> sys.getsizeof([])
56
>>> sys.getsizeof(dict())
232 # varies noticeably between versions; an empty dict is 64 bytes on 3.8+
>>> sys.getsizeof((1,1))
56
>>> sys.getsizeof([1,1])
72
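For homogeneous numeric data, one way around this per-object overhead is to keep the values in a single contiguous buffer instead of a list of int objects, e.g. with the standard array module. A rough sketch (the variable names are made up, and exact byte counts vary between versions):
import array
import sys

nums = list(range(10_000))

# a list header plus 10,000 separate 28-byte int objects: roughly 360 KB
as_objects = sys.getsizeof(nums) + sum(map(sys.getsizeof, nums))

# the same values in one contiguous buffer of 8-byte machine ints: roughly 80 KB
as_buffer = sys.getsizeof(array.array('q', nums))

print(as_objects, as_buffer)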
Interning strings could help, but make sure you don't have too many unique values, as that could do more harm than good.
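For example, with sys.intern, equal strings share a single object, so a value that occurs many times is stored only once:
>>> import sys
>>> a = sys.intern('some long repeated string value')
>>> b = sys.intern('some long repeated string value')
>>> a is b
True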
It's hard to say how to optimize your specific case, as there is no single universal solution. You could save a lot of memory by serializing the data from multiple items into a single byte buffer, for example, but that could complicate your code or hurt performance too much; in many cases it won't be worth it. If I were in a situation where I really needed to optimize memory usage, I would also consider writing that part in a language like Rust (it's not too hard to create a native Python module, e.g. via PyO3).
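Here is a minimal sketch of the byte-buffer idea, assuming (hypothetically) that each item is an (id, x, y) record; the names and layout are made up for illustration:
import struct

# one record: uint32 id plus two float64 coordinates, packed without padding,
# i.e. 20 bytes per item instead of a tuple holding three separate objects
REC = struct.Struct('<Idd')

def pack_items(items):
    """Serialize an iterable of (id, x, y) tuples into one bytes object."""
    buf = bytearray()
    for item_id, x, y in items:
        buf += REC.pack(item_id, x, y)
    return bytes(buf)

def get_item(buf, i):
    """Random access by index; a tuple is built only when you ask for it."""
    return REC.unpack_from(buf, i * REC.size)
With that, random access still works, but the data lives in one object:
>>> get_item(pack_items([(1, 0.25, 0.5), (2, 1.5, 2.5)]), 1)
(2, 1.5, 2.5)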