I have small files in .csv.gz compressed format in a GCS bucket. I have mounted the bucket and created external volumes on top of it in Databricks (Unity Catalog enabled). When I try to read a file of just 100 KB, it throws the error below.

[FAILED_READ_FILE.NO_HINT] Error while reading file , SQLSTATE: KD001

Code I'm using:

df = spark.read.option("header", "true").csv("dbfs:/Volumes/file.csv.gz")

Can anyone help me out with this ?
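One way to rule out the compressed file itself is a plain-Python gzip read outside Spark. This is a minimal sketch using in-memory stand-in data (hypothetical, not the real volume file): decompress with `gzip`, then parse with `csv`, which is conceptually what Spark does for a `.csv.gz` input.

```python
import csv
import gzip
import io

# Build a tiny gzip-compressed CSV in memory (hypothetical stand-in
# for the real file in the GCS-backed volume).
buf = io.BytesIO()
with gzip.open(buf, "wt", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name"])
    writer.writerow(["1", "alice"])

# Read it back: decompress, then parse the CSV rows.
buf.seek(0)
with gzip.open(buf, "rt", newline="") as f:
    rows = list(csv.reader(f))

print(rows)  # [['id', 'name'], ['1', 'alice']]
```

If the real file passes an equivalent check but Spark still fails, the problem is more likely the path or the cluster setup than the file format. Note that Unity Catalog volume paths normally include catalog, schema, and volume segments (`/Volumes/<catalog>/<schema>/<volume>/...`), so a path like the one in the snippet above may simply not resolve to the file.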

  • Is this the only error traceback from Databricks? Commented Oct 3 at 1:44
  • Yes, apart from that it points to the core Spark Python file error, which I have now attached in the question. Commented Oct 3 at 10:23
  • Traceback excerpt:
    2363 raise SparkConnectGrpcException(
    2364     "Python versions in the Spark Connect client and server are different. "
    2365     "To execute user-defined functions, client and server should have the "
    (...)
    2373     "sqlState", default=SparkConnectGrpcException.CLIENT_UNEXPECTED_MISSING_SQL_STATE),
    2374 ) from None
    2375 # END-EDGE
    -> 2377 raise convert_exception(
    Commented Oct 3 at 10:29
  • Based on the error above, it seems that you are using Spark Connect, which is causing the issue. Can you please create an MRE (minimal reproducible example)? Thx Commented Oct 5 at 11:51
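The traceback in the comments points at Spark Connect's check that the client and server run the same minor Python version before executing UDFs. A minimal sketch of that comparison, with the server version as a stated assumption (the real value comes from the Databricks cluster):

```python
import sys

# Client-side Python version (major, minor), as Spark Connect would see it.
client = (sys.version_info.major, sys.version_info.minor)

# Assumption: the server-side version reported by the cluster. Replace
# with the actual runtime's Python version when checking a real setup.
server = (3, 10)

# Spark Connect raises SparkConnectGrpcException when these differ,
# so this boolean models the condition being checked in the traceback.
versions_match = client == server
print(f"client={client} server={server} match={versions_match}")
```

If the versions differ, aligning the local Python environment with the cluster's Databricks Runtime version is the usual remedy.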
