"What is using all this memory?"
There's overhead for Python objects. See how many bytes some strings actually take:
Python 2:
>>> import sys
>>> map(sys.getsizeof, ('', 'a', u'ä'))
[21, 22, 28]
Python 3:
>>> import sys
>>> list(map(sys.getsizeof, ('', 'a', 'ä')))
[25, 26, 38]
"What is a more efficient way to do this memory wise?"
In comments you said there are lots of duplicate values, so string interning (storing only one copy of each distinct string value) might help a lot. Try this:
Python 2:
markers.append(map(intern, line.rstrip().split('\t')))
Python 3:
markers.append(list(map(sys.intern, line.rstrip().split('\t'))))
Note I also used line.rstrip() to remove the trailing \n from the line.
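For context, a minimal sketch of the whole reading loop with interning might look like this (assuming Python 3, a tab-separated file, and a placeholder file name):

import sys

markers = []
with open('markers.txt') as f:  # placeholder name for your data file
    for line in f:
        # intern each field so duplicate values share a single string object
        markers.append(list(map(sys.intern, line.rstrip().split('\t'))))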
Experiment
I tried
>>> x = [str(i % 1000) for i in range(10**7)]
and
>>> import sys
>>> x = [sys.intern(str(i % 1000)) for i in range(10**7)]
in Python 3. The first one takes 355 MB (looking at the process in Windows Task Manager). The second one takes only 47 MB. Furthermore:
>>> sys.getsizeof(x)
40764032
>>> sum(map(sys.getsizeof, x[:1000]))
27890
So about 40 MB is for the list object referencing the strings (no surprise, as there are ten million references of four bytes each). And because interning leaves only 1000 distinct strings, the strings themselves total only about 27 KB.
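If you want to make a similar rough estimate yourself, here's a small sketch (my own helper, not from the question) that counts the list object plus each distinct string only once, which is what matters when the strings are interned and shared:

import sys

def rough_size(lst):
    # list object itself plus each distinct string counted once
    distinct = set(lst)
    return sys.getsizeof(lst) + sum(map(sys.getsizeof, distinct))

>>> rough_size(x)  # for the interned version above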
Further improvements
As seen in the experiment, much of your RAM usage might come not from the strings but from your list objects: both the markers list itself and all those small list objects representing your rows. This is especially true if you're using 64-bit Python, which I suspect you are.
To reduce that overhead, you could use tuples instead of lists for your rows, as they're more light-weight:
>>> sys.getsizeof(['a', 'b', 'c'])
48
>>> sys.getsizeof(('a', 'b', 'c'))
40
I estimate your 2 GB file has 80 million rows, so at 8 bytes saved per row that would save about 640 MB of RAM. Perhaps more if you run 64-bit Python.
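Combining this with interning (again assuming Python 3 and tab-separated lines), each row could be appended as a tuple instead:

markers.append(tuple(map(sys.intern, line.rstrip().split('\t'))))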
Another idea: If all your rows have the same number of values (I assume three), then you could ditch those 80 million row list objects and use a one-dimensional list of the 240 million string values instead. You'd just have to access it with markers[3*i+j] instead of markers[i][j]. And it could save a few GB of RAM.
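As an illustrative sketch (assuming exactly three values per row and the same placeholder file name), the flat layout and the indexed access could look like this:

import sys

NUM_COLS = 3  # assumed number of values per row
markers = []
with open('markers.txt') as f:  # placeholder file name
    for line in f:
        # extend the one flat list instead of appending a per-row list
        markers.extend(map(sys.intern, line.rstrip().split('\t')))

def get(i, j):
    # value j of row i, replacing markers[i][j]
    return markers[NUM_COLS * i + j]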
Yet another idea: use the csv module to parse the file. It will probably do a better job of handling how the file is read and cached.
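A minimal sketch of that, assuming tab-separated values and the same placeholder file name:

import csv
import sys

markers = []
with open('markers.txt', newline='') as f:  # placeholder file name
    for row in csv.reader(f, delimiter='\t'):
        # intern the fields and store the row as a light-weight tuple
        markers.append(tuple(map(sys.intern, row)))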