29

I have established the JDBC connection and can successfully execute statements like "use warehouse ...". When I try to run any SELECT statement, I get the following error:

net.snowflake.client.jdbc.SnowflakeSQLLoggedException: JDBC driver internal error: Fail to retrieve row count for first arrow chunk: null.

I can see in the Snowflake UI that my request was successful and returned the expected data.

The error occurs on this line: rs = statement.executeQuery("select TOP 1 EVENT_ID from snowflake.account_usage.login_history");

The statement executed queries successfully prior to this line, and the result sets were as expected. Roughly, the full setup looks like this (a minimal sketch; the account URL, credentials, and warehouse name are placeholders):
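import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class SnowflakeRepro {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.put("user", "MY_USER");         // placeholder
        props.put("password", "MY_PASSWORD"); // placeholder

        try (Connection connection = DriverManager.getConnection(
                "jdbc:snowflake://myaccount.snowflakecomputing.com/", props);
             Statement statement = connection.createStatement()) {

            // Non-SELECT statements like this succeed:
            statement.execute("use warehouse MY_WH");

            // This throws "Fail to retrieve row count for first arrow chunk: null":
            ResultSet rs = statement.executeQuery(
                    "select TOP 1 EVENT_ID from snowflake.account_usage.login_history");
            System.out.println(rs.next());
        }
    }
}

Any insight would be appreciated!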

1 Comment

It only fails with that query? What about other selects from a table? Commented May 6, 2021 at 5:19

7 Answers

32

This could happen due to several reasons:

  1. What JDK version are you using? JDK 16 introduced strong encapsulation of JDK internals (see JEP 396). If you're using JDK 16, try setting this at the JVM level on startup (see the launch example after this list):
-Djdk.module.illegalAccess=permit

This is a workaround until we get a fix for the Apache Arrow issue ARROW-12747.

  2. If you use an application that connects to Snowflake over JDBC, the application might not interpret the results correctly. Try switching back to the JSON result format rather than Arrow and see if that fixes it. This can be done at the session level by running:
ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'
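For option 1, the flag must be passed when the JVM starts; for a standalone app that could look like this (a sketch; app.jar stands in for your actual application):

# Sketch: pass the flag at JVM startup (app.jar is a placeholder)
java -Djdk.module.illegalAccess=permit -jar app.jar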

5 Comments

Hi, can you explain what -Djdk.module.illegalAccess=permit does? I had this same problem in DBeaver on Arch Linux with Java 16 (OpenJDK) connecting to Snowflake. If you can explain more about option 1, and also what you mean by JDKs above 11 possibly causing unexpected behavior, this answer would be more complete and helpful.
@sergiu thank you for this answer. It's a bit problematic for those of us using the Snowflake drivers for R, however, because we can't specify JVM-level options or alter the session on startup, which means we have to downgrade our JVM version system-wide to get a connection. Can we hope that there will be progress soon on the issue of Arrow compatibility?
@Bob The Arrow issue has been opened directly with Apache, so we're waiting for feedback from them.
Worked like a charm: export JAVA_OPTS="-Djdk.module.illegalAccess=permit"
Would it not make things inefficient by not utilising the newer Arrow format?
21

Using DBeaver to connect to Snowflake, I had the same issue. It is resolved by setting the session parameter in each editor window as follows:

ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON';

This can be automated by configuring bootstrap queries under Connection settings -> Initialization. With every new editor window, this session parameter will then be set during initialization.

3 Comments

Thank you so much! This solves the problem. I had to use the Intel version of DBeaver for almost 3 months and query fetches were painfully slow. The Apple Silicon binary with this fix is much faster now.
Has anyone solved this issue with Databricks and DBeaver? I have tried this solution but ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'; does not work with Databricks.
This also helps if using a recent IntelliJ IDEA / DataGrip IDE to connect. When you configure the data source, changing the JVM properties for whatever reason does not work. Instead, in the "Options" tab for the Snowflake datasource, add the following string: ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON';.
5

You can add the following two settings to this file (macOS): /Applications/DBeaver.app/Contents/Eclipse/dbeaver.ini

-Djdk.module.illegalAccess=permit
--add-opens=java.base/java.nio=ALL-UNNAMED
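For context, dbeaver.ini is an Eclipse-style launcher file, so JVM flags like these belong below the -vmargs line, one per line (a sketch; existing entries omitted):

-vmargs
-Djdk.module.illegalAccess=permit
--add-opens=java.base/java.nio=ALL-UNNAMED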

Information from: https://support.dbvis.com/support/solutions/articles/1000309803-snowflake-fail-to-retrieve-row-count-for-first-arrow-chunk-


Another alternative that worked for me on my M1 Mac is to use JDK 11:

brew install openjdk@11

Edit /Applications/DBeaver.app/Contents/Eclipse/dbeaver.ini and change this line: ../Eclipse/jre/Contents/Home/bin/java to: /opt/homebrew/opt/openjdk@11/bin/java

Restart DBeaver.

2 Comments

Using the second alternative here with JDK11 worked for me. Mahalo
This works for me when running my own Java application; I just added the suggested flags to the command-line arguments, problem solved.
3

I hit the same problem, and was able to get it working by downgrading to Java 11 with driver version

[net.snowflake/snowflake-jdbc "3.13.8"]

4 Comments

I too encountered the same problem the OP did, only I am connecting R to Snowflake. I have tried your solution (downgrading Java to 11.0.13 and the Snowflake .jar to 3.13.8). I am able to run dbListTables(con) and am presented with a list of tables, but when running a query, e.g. dbGetQuery(con, "select * from table limit 10"), I get the error below.
Note: method with signature ‘DBIConnection#character’ chosen for function ‘dbListFields’, target signature ‘JDBCConnection#character’. "JDBCConnection#ANY" would also be valid Error in .verify.JDBC.result(r, "Unable to retrieve JDBC result set", : Unable to retrieve JDBC result set JDBC ERROR: SQL compilation error: Object '"dwh_db.visitor.table"' does not exist or not authorized. Statement: SELECT * FROM "dwh_db.visitor.table" LIMIT 0
Solved using a particular combination of Java version and Snowflake .jar version (link).
I could also fix the issue by using snowflake-jdbc 3.10.3 and Java Zulu 11.66
3

Before executing the actual query, you need to run this:

statement.executeQuery("ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'");

1 Comment

Where should this be run? I get the error when running Flyway, which is installed via brew install flyway.
2

The official solution from Snowflake is to configure an extra property in your datasource configuration: https://community.snowflake.com/s/article/SAP-BW-Java-lang-NoClassDefFoundError-for-Apache-arrow

You can set this property (jdbc_query_result_format=json) in the datasource properties of the application server, or as a session property in the application, like:

Statement statement = connection.createStatement();
statement.executeQuery("ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'");

This uses JSON as the result format instead of Arrow, which avoids the above error.
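As a sketch of the datasource-property route, the parameter can be passed as a connection property instead of running ALTER SESSION yourself (the account URL and credentials are placeholders; verify that your driver version accepts session parameters this way):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Sketch: pass the result format as a connection/datasource property
Properties props = new Properties();
props.put("user", "MY_USER");         // placeholder
props.put("password", "MY_PASSWORD"); // placeholder
props.put("JDBC_QUERY_RESULT_FORMAT", "JSON");

Connection connection = DriverManager.getConnection(
        "jdbc:snowflake://myaccount.snowflakecomputing.com/", props);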

1 Comment

This worked for me: JDBC_QUERY_RESULT_FORMAT='JSON'. I added it at the end of the connection string. Thank you for the solution.
2

This happens on modern JVMs because Arrow wants to do low-level memory allocations. It logs to stdout that it wants the JVM to be started with --add-opens=java.base/java.nio=ALL-UNNAMED.

The fixes that work on JDK 17 are described in the Arrow docs as:

# Directly on the command line
java --add-opens=java.base/java.nio=ALL-UNNAMED -jar ...

# Indirectly via environment variables
env _JAVA_OPTIONS="--add-opens=java.base/java.nio=ALL-UNNAMED" java -jar ...

So if you are using IntelliJ or Docker, setting the env var _JAVA_OPTIONS to --add-opens=java.base/java.nio=ALL-UNNAMED is probably the best solution.
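In Docker, for example, the env var can be passed into the container like this (a sketch; my-app-image is a placeholder):

# Sketch: passing the env var into a container (image name is a placeholder)
docker run -e _JAVA_OPTIONS="--add-opens=java.base/java.nio=ALL-UNNAMED" my-app-image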

UPDATE: we migrated to Java 21 with no issues. I noticed that the various env var overrides can conflict (see Difference between _JAVA_OPTIONS, JAVA_TOOL_OPTIONS and JAVA_OPTS), so when debugging I would suggest trying the direct command-line approach rather than setting env vars.

2 Comments

This doesn't seem to work with java 21 unfortunately.
We upgraded to Java 21 and we still use that setting. I find no evidence that we configured Snowflake to use JSON rather than Arrow as per their JDBC docs. I noticed that stackoverflow.com/q/28327620/329496 says the different env vars behave differently, so maybe that could be your issue. Try a direct command-line setting on the JVM to debug.
