
Trying to use dask's read_csv on a file that pandas's read_csv parses fine, like this:

import dask.dataframe as dd

dd.read_csv('data/ecommerce-new.csv')

fails with the following error:

pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at line 2

The file is a CSV of data scraped with scrapy, with two columns: one with the URL and the other with the HTML (which is stored across multiple lines, using " as the quoting character). Since pandas actually parses it, the file should be well-formed.

html,url
https://google.com,"<a href=""link"">
</a>"

Making the sample argument big enough to load the entire file into memory seems to work, which makes me believe it actually fails when trying to infer the datatypes (there is also this issue, which should have been solved: https://github.com/dask/dask/issues/1284).
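For what it's worth, that workaround looks like this (the byte count here is an arbitrary assumption, picked to exceed the file size):

import dask.dataframe as dd

# sample is the number of bytes dask reads up front to infer column dtypes;
# making it larger than the file means the sampled block cannot end mid-field
df = dd.read_csv('data/ecommerce-new.csv', sample=2**30)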

Has anyone encountered this problem before? Is there a fix/workaround?

EDIT: Apparently this is a known problem with dask's read_csv when the file contains a newline character between quotes. A solution I found was to simply read it all into memory:

import pandas as pd
import dask.dataframe as dd

df = dd.from_pandas(pd.read_csv(input_file), chunksize=25)

This works, but at the cost of parallelism. Any other solution?

• could you try something like df = dd.read_csv(csvfile, delimiter="\t", quoting=csv.QUOTE_NONE, encoding='utf-8') <- the quoting part is important here; you can also try quoting=3 and error_bad_lines=False, but let's first see what the first gives you. :) Commented Aug 18, 2017 at 9:26
• Tried it; not sure why the delimiter was '\t', as I'm using commas for separation, but nonetheless it now yields this error: pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 30, saw 2 Commented Aug 18, 2017 at 10:28
• And when using comma as a delimiter: pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 21, saw 13 Commented Aug 18, 2017 at 10:41
• You will need blocksize=None in dd.read_csv, which will read the whole file into one chunk - but you still get parallelism between files. Dask determines chunks by looking for '\n', and you cannot know whether one is an actual line separator without serially parsing the file up to that point. (A sketch of this follows the comments.) Commented Aug 18, 2017 at 12:55
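For illustration, a minimal sketch of that suggestion (the glob pattern is a hypothetical, standing in for multiple scraped files):

import dask.dataframe as dd

# blocksize=None turns off byte-based chunking: each file becomes one
# partition, so no chunk boundary can fall inside a quoted multiline field.
# With a glob, dask still loads the files in parallel, one task per file.
df = dd.read_csv('data/*.csv', blocksize=None)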

1 Answer


For people coming here in 2020: dd.read_csv now handles newlines inside quotes directly. It has been fixed. Update to a recent version of Dask (2.18.1 or above) to get this behavior.

import dask.dataframe as dd
df = dd.read_csv('path_to_your_file.csv')
print(df.compute())

This gives:

                 html                    url
0  https://google.com  <a href="link">\n</a>

OR

For people who want to use an older version for some reason: as suggested by @mdurant, you might want to pass blocksize=None to dd.read_csv, which comes at the cost of parallel loading within a single file.
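A minimal sketch of that fallback, reusing the answer's placeholder path:

import dask.dataframe as dd

# One partition per file: slower for a single large file, but safe for
# quoted fields that contain newlines
df = dd.read_csv('path_to_your_file.csv', blocksize=None)
print(df.compute())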


4 Comments

Hi, could you please specify the version in which it has been fixed? It would be very helpful.
@ArsenyNerinovsky I don't know the exact version where the fix was made, but I have 2.18.1 and it works in this version.
still having issues in 2020 with a freshly installed copy of dask :(
Checked it with a fresh install of dask==2.30.0 and it works. I think you might have some other problem. @Sam
