I have small .csv.gz compressed files in a GCS bucket. I have mounted the bucket and created external volumes on top of it in Databricks (Unity Catalog enabled). When I try to read a file that is just 100 KB in size, it throws the error below.
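For context, this is roughly how the volume was set up (the catalog, schema, volume, and bucket names below are placeholders, not my real ones):

# A minimal sketch of the volume setup, with placeholder names (my_catalog,
# my_schema, my_volume, my-bucket) standing in for the real ones.
# Assumes an external location backed by the GCS bucket already exists
# in Unity Catalog.
spark.sql("""
    CREATE EXTERNAL VOLUME IF NOT EXISTS my_catalog.my_schema.my_volume
    LOCATION 'gs://my-bucket/csv-files/'
""")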
[FAILED_READ_FILE.NO_HINT] Error while reading file , SQLSTATE: KD001
Code I'm using:
df = spark.read.option("header", "true").csv("dbfs:/Volumes/file.csv.gz")
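For reference, my understanding from the Databricks docs is that Unity Catalog volume paths are fully qualified as /Volumes/&lt;catalog&gt;/&lt;schema&gt;/&lt;volume&gt;/&lt;path&gt;, so a read would look roughly like this (all names below are placeholders, not my real ones):

# Hypothetical fully qualified volume path; my real catalog/schema/volume
# names would go here
df = spark.read.option("header", "true").csv("/Volumes/my_catalog/my_schema/my_volume/file.csv.gz")

Could the dbfs:/ prefix or the path format be related to this error?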
Can anyone help me out with this?