Trying to use dask's read_csv on a file that pandas's read_csv parses fine, like this:
dd.read_csv('data/ecommerce-new.csv')
fails with the following error:
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at line 2
The file is a CSV of data scraped with scrapy, with two columns: one holding the url and the other the html (which is stored across multiple lines, quoted with " as the quote character). The fact that pandas parses it means it should be well-formatted.
html,url
https://google.com,"<a href=""link"">
</a>"
Making the sample argument big enough to load the entire file into memory seems to work, which makes me believe it actually fails when trying to infer the datatypes (there is also this issue, which should have been solved: https://github.com/dask/dask/issues/1284).
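For illustration, a sketch of that workaround (the sample value is an arbitrary byte count assumed to be larger than the file; sample controls how many bytes dask reads up front for dtype inference):

import dask.dataframe as dd

# sample: bytes read from the start of the file for dtype inference;
# 256 MB is an arbitrary value assumed to cover the whole file
df = dd.read_csv('data/ecommerce-new.csv', sample=256_000_000)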
Has anyone encountered this problem before? Is there a fix/workaround?
EDIT: Apparently this is a known problem with dask's read_csv if the file contains a newline character between quotes. A solution I found was to simply read it all in memory:
dd.from_pandas(pd.read_csv(input_file), chunksize=25)
This works, but at the cost of parallelism. Any other solution?
A comment suggested turning quoting off:
df = dd.read_csv(csvfile, delimiter="\t", quoting=csv.QUOTE_NONE, encoding='utf-8')
<- the quoting part is important here; you can also try quoting=3 and error_bad_lines=False, but let's first see what the first one gives you. :)
Trying that only changes the error:
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 30, saw 2
pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 21, saw 13
Answer: pass blocksize=None in dd.read_csv, which will read the whole file into one chunk - but you still get parallelism between files. Dask determines chunks by looking for '\n', and you cannot know if one is an actual line separator without serially parsing through the file up to that point.
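A minimal sketch of that approach (the glob pattern is illustrative; with blocksize=None each file becomes exactly one partition, so dask never splits a file in the middle of a quoted multiline field):

import dask.dataframe as dd

# blocksize=None: one partition per input file, no mid-file splits,
# so quoted newlines are safe; parallelism comes from having
# multiple input files
df = dd.read_csv('data/*.csv', blocksize=None)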